How to use multisampling with an MTKView? (Swift)

I'm trying to get multisampling working with MTKView. I have an MTKView with a delegate. I set the view's sampleCount property to 4. I create a pipeline state descriptor with the rasterSampleCount set to 4, and use that to make a render pipeline state that I use when rendering.
In the delegate's draw(in:) method, I create a render pass descriptor by getting the view's currentRenderPassDescriptor and setting its storeAction to multisampleResolve. I've also tried storeAndMultisampleResolve, to no avail.
I have created a resolve texture for the render pass descriptor, and it is the same width and height as the view and the same pixel format.
Given the above, I get a fully red frame during rendering. I have used the Metal debugger to look at the textures, and both the view's texture and the resolve texture contain the correct rendering. I'm on an AMD machine, where a fully red texture often indicates an uninitialized texture.
Is there anything I need to do to get the rendering to go to the screen?
Here's how I'm setting up the view, pipeline state, and resolve texture:
metalView = newMetalView
metalView.sampleCount = 4
metalView.clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.0)

device = newMetalView.device!
let metalLibrary = device.makeDefaultLibrary()!
let vertexFunction = metalLibrary.makeFunction(name: "vertexShader")
let fragmentFunction = metalLibrary.makeFunction(name: "fragmentShader")

let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
pipelineStateDescriptor.label = "Particle Renderer"
pipelineStateDescriptor.vertexFunction = vertexFunction
pipelineStateDescriptor.fragmentFunction = fragmentFunction
pipelineStateDescriptor.colorAttachments[0].pixelFormat = metalView.colorPixelFormat
pipelineStateDescriptor.rasterSampleCount = 4

do {
    pipelineState = try device.makeRenderPipelineState(descriptor: pipelineStateDescriptor)
} catch {
    NSLog("Unable to create pipeline state")
    pipelineState = nil
}

let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: metalView.colorPixelFormat,
                                                                 width: Int(metalView.bounds.width),
                                                                 height: Int(metalView.bounds.height),
                                                                 mipmapped: false)
resolveTexture = device.makeTexture(descriptor: textureDescriptor)!
And here's how I'm drawing:
let commandBuffer = commandQueue.makeCommandBuffer()
commandBuffer?.label = "Particle Command Buffer"

let renderPassDescriptor = metalView.currentRenderPassDescriptor
renderPassDescriptor?.colorAttachments[0].clearColor = MTLClearColorMake(0.0, 0.0, 0.0, 0.0)
renderPassDescriptor?.colorAttachments[0].loadAction = .clear
renderPassDescriptor?.colorAttachments[0].storeAction = .multisampleResolve
renderPassDescriptor?.colorAttachments[0].resolveTexture = resolveTexture

let renderEncoder = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPassDescriptor!)
renderEncoder?.label = "Particle Render Encoder"
renderEncoder?.setViewport(MTLViewport(originX: 0.0, originY: 0.0,
                                       width: Double(viewportSize.x), height: Double(viewportSize.y),
                                       znear: -1.0, zfar: 1.0))
renderEncoder?.setRenderPipelineState(pipelineState!)
Then I make my draw calls, and finish up by calling:
renderEncoder?.endEncoding()
commandBuffer?.present(metalView.currentDrawable!)
commandBuffer?.commit()
Here's what the debugger shows in my textures:
Oddly, while doing that debugging, I accidentally hid Xcode, and for one frame the view showed the correct texture.

What's the initial configuration of renderPassDescriptor (as returned from metalView.currentRenderPassDescriptor)?
I believe you want the color attachment's texture set to metalView.multisampleColorTexture and its resolveTexture set to metalView.currentDrawable.texture. That is, the primary multisampled rendering should go to the multisample texture, which then gets resolved into the drawable's texture to actually draw it in the view.
I don't know if MTKView sets up its currentRenderPassDescriptor like that automatically when sampleCount > 1. Ideally, it would.
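In other words, something like this in draw(in:) (a minimal sketch of the configuration described above; I haven't verified how much of this MTKView already does on its own):

if let renderPassDescriptor = metalView.currentRenderPassDescriptor,
   let drawable = metalView.currentDrawable {
    // Render the multisampled pass into the view's own multisample texture...
    renderPassDescriptor.colorAttachments[0].texture = metalView.multisampleColorTexture
    // ...then resolve into the drawable's texture so the result reaches the screen.
    renderPassDescriptor.colorAttachments[0].resolveTexture = drawable.texture
    renderPassDescriptor.colorAttachments[0].storeAction = .multisampleResolve
}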

Related

Blur face in face detection in vision kit

I'm using Apple's tutorial about face detection in Vision with a live camera feed, not a still image.
https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
It detects the face and uses CAShapeLayer to draw lines between different parts of the face.
fileprivate func setupVisionDrawingLayers() {
    let captureDeviceResolution = self.captureDeviceResolution

    let captureDeviceBounds = CGRect(x: 0,
                                     y: 0,
                                     width: captureDeviceResolution.width,
                                     height: captureDeviceResolution.height)
    let captureDeviceBoundsCenterPoint = CGPoint(x: captureDeviceBounds.midX,
                                                 y: captureDeviceBounds.midY)
    let normalizedCenterPoint = CGPoint(x: 0.5, y: 0.5)

    guard let rootLayer = self.rootLayer else {
        self.presentErrorAlert(message: "view was not properly initialized")
        return
    }

    let overlayLayer = CALayer()
    overlayLayer.name = "DetectionOverlay"
    overlayLayer.masksToBounds = true
    overlayLayer.anchorPoint = normalizedCenterPoint
    overlayLayer.bounds = captureDeviceBounds
    overlayLayer.position = CGPoint(x: rootLayer.bounds.midX, y: rootLayer.bounds.midY)

    let faceRectangleShapeLayer = CAShapeLayer()
    faceRectangleShapeLayer.name = "RectangleOutlineLayer"
    faceRectangleShapeLayer.bounds = captureDeviceBounds
    faceRectangleShapeLayer.anchorPoint = normalizedCenterPoint
    faceRectangleShapeLayer.position = captureDeviceBoundsCenterPoint
    faceRectangleShapeLayer.fillColor = nil
    faceRectangleShapeLayer.strokeColor = UIColor.green.withAlphaComponent(0.7).cgColor
    faceRectangleShapeLayer.lineWidth = 5
    faceRectangleShapeLayer.shadowOpacity = 0.7
    faceRectangleShapeLayer.shadowRadius = 5

    let faceLandmarksShapeLayer = CAShapeLayer()
    faceLandmarksShapeLayer.name = "FaceLandmarksLayer"
    faceLandmarksShapeLayer.bounds = captureDeviceBounds
    faceLandmarksShapeLayer.anchorPoint = normalizedCenterPoint
    faceLandmarksShapeLayer.position = captureDeviceBoundsCenterPoint
    faceLandmarksShapeLayer.fillColor = nil
    faceLandmarksShapeLayer.strokeColor = UIColor.yellow.withAlphaComponent(0.7).cgColor
    faceLandmarksShapeLayer.lineWidth = 3
    faceLandmarksShapeLayer.shadowOpacity = 0.7
    faceLandmarksShapeLayer.shadowRadius = 5

    overlayLayer.addSublayer(faceRectangleShapeLayer)
    faceRectangleShapeLayer.addSublayer(faceLandmarksShapeLayer)
    rootLayer.addSublayer(overlayLayer)

    self.detectionOverlayLayer = overlayLayer
    self.detectedFaceRectangleShapeLayer = faceRectangleShapeLayer
    self.detectedFaceLandmarksShapeLayer = faceLandmarksShapeLayer

    self.updateLayerGeometry()
}
How can I fill the area inside the lines (the different parts of the face) with a blurry view? I need to blur the face.
You could try placing a UIVisualEffectView on top of your video feed, and then adding a masking CAShapeLayer to that UIVisualEffectView. I don't know if that would work or not.
The docs on UIVisualEffectView say:
When using the UIVisualEffectView class, avoid alpha values that are less than 1. Creating views that are partially transparent causes the system to combine the view and all the associated subviews during an offscreen render pass. UIVisualEffectView objects need to be combined as part of the content they are layered on top of in order to look correct. Setting the alpha to less than 1 on the visual effect view or any of its superviews causes many effects to look incorrect or not show up at all.
I don't know if using a mask layer on a visual effect view would cause the same rendering problems or not. You'd have to try it, along the lines of the sketch below. (And be sure to try it on a range of different hardware, since rendering performance varies quite a bit between generations of Apple's chipsets.)
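A minimal sketch of that masking approach (untested; videoPreviewView and faceRect are hypothetical stand-ins for your preview view and the detected face rectangle in view coordinates):

let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .regular))
blurView.frame = videoPreviewView.bounds
videoPreviewView.addSubview(blurView)

// Mask the effect view so the blur shows only inside the face shape.
let maskLayer = CAShapeLayer()
maskLayer.path = UIBezierPath(ovalIn: faceRect).cgPath
blurView.layer.mask = maskLayer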
You could also try using a shape layer filled with a visual hash or "pixellated" pattern instead of blurring. That would be faster and would probably render more reliably.
Note that face detection tends to be a little jumpy. It might drop out for a few frames, or lag on quick pans or scene changes. If you're trying to hide people's faces in a live feed for privacy, it might not be reliable; it would only take a few un-blurred frames for somebody's identity to be revealed.

SceneKit – Loading HDR or EXR lightingEnvironment has no effect

I tried loading an .hdr file to use it as a skybox and to use its lighting information. This is the code I used:
backgroundColor = UIColor.gray
// check if a default skybox is added
let environment = UIImage(named: "studio_small_09_2k.hdr")
scene?.lightingEnvironment.contents = environment
scene?.lightingEnvironment.intensity = 1.0
scene?.background.contents = environment
Unfortunately I receive a grey screen and no errors. Does anyone have experience using .hdr files in SceneKit?
Xcode version: 13.2.1
iOS version: 15.3.1
hdr file: https://polyhaven.com/a/studio_small_09
I usually use a Cube Texture Set, where each of the six images is square (height == width).
Also, the following cube map representations are supported:
Vertical strip as single image (height == 6 * width)
Horizontal strip as single image (6 * height == width)
Spherical projection as single image (2 * height == width)
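For example, a cube texture set can be assigned as an array of six square face images (the file names here are hypothetical; check the SCNMaterialProperty documentation for the expected face order):

let faces = ["px", "nx", "py", "ny", "pz", "nz"].compactMap { UIImage(named: $0) }
scene?.lightingEnvironment.contents = faces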
Here's the SwiftUI code (the makeUIView(context:) method of a UIViewRepresentable):
func makeUIView(context: Context) -> SCNView {
    let sceneView = SCNView(frame: .zero)
    sceneView.scene = SCNScene()
    // if EXR or HDR is a 2:1 spherical map, it really meets the requirements
    sceneView.scene?.lightingEnvironment.contents = UIImage(named: "std.exr")
    sceneView.backgroundColor = .black
    sceneView.autoenablesDefaultLighting = true
    sceneView.allowsCameraControl = true

    let node = SCNNode()
    node.geometry = SCNSphere(radius: 0.1)
    node.geometry?.firstMaterial?.lightingModel = .physicallyBased
    node.geometry?.firstMaterial?.metalness.contents = 1.0
    sceneView.scene?.rootNode.addChildNode(node)

    return sceneView
}
Pay particular attention: you need the .physicallyBased lighting model to get HDR or EXR reflections.
And set it as the scene's background:
sceneView.scene?.background.contents = UIImage(named: "std.exr")
Why doesn't your .exr work?
The solution is simple: delete your .exr from the project, empty the Trash, and then drag and drop the .exr file back in. In the "Choose options for adding these files" window, choose Add to targets.
Now your .exr should work.

Merge CAShapeLayer into CVPixelBuffer

I'm capturing the output of a playing video using AVPlayerItemVideoOutput.copyPixelBuffer.
I'm able to convert the pixel buffer into a CIImage, then render it back into a pixel buffer again, and an AVAssetWriter then writes the buffer stream out to a new movie clip successfully.
The reason I'm converting to CIImage is that I want to do some manipulation of each frame. (So far I don't understand how to manipulate pixel buffers directly.)
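For context, that round trip looks roughly like this (a sketch only; pixelBuffer, outputPixelBuffer, and ciContext are assumed to already exist, and the saturation filter is just a placeholder manipulation):

let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
// Example per-frame manipulation: desaturate the frame.
let filtered = ciImage.applyingFilter("CIColorControls",
                                      parameters: [kCIInputSaturationKey: 0.0])
// Render back into a pre-allocated pixel buffer.
ciContext.render(filtered, to: outputPixelBuffer)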
In this case I want to overlay a "scribble" style drawing that the user does with their finger. While the video plays, they can draw over it. I'm capturing this drawing successfully into a CAShapeLayer.
The code below successfully outputs just the overlay CAShapeLayer. When I try to reincorporate the original frame by uncommenting the lines shown, the entire process bogs down drastically, dropping from 60fps to an unstable 10fps or so on an iPhone 12. I get a stable 60fps in all cases except when that code is uncommented.
What's the best way to incorporate the shape layer into this stream of pixel buffers in 60fps "real time"?
Note: some of this code is not finalized (setting bounds correctly, etc.). However, that is not related to my question, and I'm aware it has to be done. The rotation/translation are there to orient the shape layer; this all works for now.
func addShapesToBuffer(buffer: CVPixelBuffer, shapeLayer: CAShapeLayer) -> CVPixelBuffer? {
    let coreImage = CIImage(cvImageBuffer: buffer)
    let newBuffer = getBuffer(from: coreImage)
    CVPixelBufferLockBaseAddress(newBuffer!, [])

    let rect = CGRect(origin: .zero, size: CGSize(width: 800, height: 390))
    shapeLayer.shouldRasterize = true
    shapeLayer.rasterizationScale = UIScreen.main.scale
    shapeLayer.backgroundColor = UIColor.clear.cgColor

    let renderer = UIGraphicsImageRenderer(size: rect.size)
    let uiImageDrawing = renderer.image { context in
        // let videoImage = UIImage(ciImage: coreImage)
        // videoImage.draw(in: rect)
        let cgContext = context.cgContext
        cgContext.rotate(by: deg2rad(-90))
        cgContext.translateBy(x: -390, y: 0)
        return shapeLayer.render(in: cgContext)
    }

    let ciContext = CIContext()
    let newImage = CIImage(cgImage: uiImageDrawing.cgImage!)
    ciContext.render(newImage, to: newBuffer!)
    CVPixelBufferUnlockBaseAddress(newBuffer!, [])
    return newBuffer
}

SCNParticleSystem: animating "particleColor" property in code

I would like to animate a certain color sequence in a SceneKit particle system in code only (like one can do within the particle system editor in Xcode).
I tried the following, and the app crashes each time the animation is attached to the particle system. The compiler does not complain, and Xcode does not indicate any error in the syntax. (Without the animation part, the particle system works fine.)
func particleSystemTesting(shape: SCNGeometry) -> SCNParticleSystem {
    let explosion = SCNParticleSystem(named: "explosion.scnp", inDirectory: nil)!
    explosion.emitterShape = shape
    explosion.birthLocation = .surface
    explosion.birthDirection = .surfaceNormal
    explosion.isAffectedByGravity = true
    explosion.isLightingEnabled = false
    explosion.loops = true
    explosion.sortingMode = .none
    explosion.isBlackPassEnabled = true
    explosion.blendMode = .additive
    explosion.particleColor = UIColor.white // used as a default; should not even be required

    // Animation part
    let animation = CABasicAnimation(keyPath: "particleColor")
    animation.fromValue = UIColor.blue
    animation.toValue = UIColor.red
    animation.duration = 2.0
    animation.isRemovedOnCompletion = true
    explosion.addAnimation(animation, forKey: nil) // causes the crash

    return explosion
}
These are the errors the console prints out:
2020-08-23 18:25:53.281120+0200 MyTestApp[1684:892874] Metal GPU Frame Capture Enabled
2020-08-23 18:25:53.281349+0200 MyTestApp[1684:892874] Metal API Validation Enabled
2020-08-23 18:25:56.563208+0200 MyTestApp[1684:892874] -[SCNParticleSystem copyAnimationChannelForKeyPath:animation:]: unrecognized selector sent to instance 0x101041500
2020-08-23 18:25:56.564524+0200 MyTestApp[1684:892874] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[SCNParticleSystem copyAnimationChannelForKeyPath:animation:]: unrecognized selector sent to instance 0x101041500'
*** First throw call stack:
(0x1998d5654 0x1995f7bcc 0x1997d9dd8 0x1998d97f8 0x1998db71c 0x1ade571f8 0x1ade55f24 0x1ade59268 0x1ade5af2c 0x1ade23d84 0x1adf18cf0 0x199852f2c 0x19984de20 0x19984e29c 0x19984dba8 0x1a39bd344 0x19d9893e4 0x100566384 0x1996d58f0)
libc++abi.dylib: terminating with uncaught exception of type NSException
After some more research and consulting Apple's docs reference, it seems that my first approach was wrong, because it would affect all spawned particles at the same time.
BUT IT STILL DOES NOT WORK AS EXPECTED: all particles are gone/invisible in the scene, otherwise no errors. This should change the colors of each particle over time. What is wrong?
func particleSystemTesting(shape: SCNGeometry) -> SCNParticleSystem {
    let explosion = SCNParticleSystem(named: "explosion_testing.scnp", inDirectory: nil)!
    // let explosion = SCNParticleSystem() // makes no difference
    explosion.emitterShape = shape
    explosion.birthLocation = .surface
    explosion.birthDirection = .surfaceNormal
    explosion.isAffectedByGravity = true
    explosion.isLightingEnabled = false
    explosion.loops = true
    explosion.sortingMode = .none
    explosion.isBlackPassEnabled = true
    explosion.blendMode = .additive
    // explosion.particleColor = UIColor.black

    let red = UIColor.red
    let green = UIColor.green
    let blue = UIColor.blue
    let yellow = UIColor.yellow

    let color1 = SCNVector4(red.redValue, red.greenValue, red.blueValue, 1.0)
    let color2 = SCNVector4(green.redValue, green.greenValue, green.blueValue, 1.0)
    let color3 = SCNVector4(blue.redValue, blue.greenValue, blue.blueValue, 1.0)
    let color4 = SCNVector4(yellow.redValue, yellow.greenValue, yellow.blueValue, 1.0)

    let animation = CAKeyframeAnimation()
    // animation.keyPath = "color" // has like no effect (?...)
    animation.values = [color1, color2, color3, color4]
    animation.keyTimes = [0, 0.333, 0.666, 1]
    animation.duration = 2.0
    animation.isAdditive = false // should overwrite default colours
    animation.isRemovedOnCompletion = true

    let colorController = SCNParticlePropertyController(animation: animation)
    explosion.propertyControllers = [SCNParticleSystem.ParticleProperty.color: colorController]
    return explosion
}
Apple's docs say this:
This property’s value is a four-component vector (an NSValue object containing an SCNVector4 value for particle property controllers, or an array of four float values for particle event or modifier blocks).
The particle system’s particleColor and particleColorVariation properties determine the initial color for each particle.
I believe the documentation is wrong. Your animation values array should contain UIColor objects.
animation.values = [UIColor.red, UIColor.green, UIColor.blue, UIColor.yellow]
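Dropped into the function above, the animation setup would then read something like this (a sketch, untested):

let animation = CAKeyframeAnimation()
animation.values = [UIColor.red, UIColor.green, UIColor.blue, UIColor.yellow]
animation.keyTimes = [0, 0.333, 0.666, 1]
animation.duration = 2.0

let colorController = SCNParticlePropertyController(animation: animation)
explosion.propertyControllers = [SCNParticleSystem.ParticleProperty.color: colorController]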
Interesting problem.
This page, https://developer.apple.com/documentation/scenekit/animation/animating_scenekit_content, shows a similar call with an SCNTransaction, so it looks like what you are doing "should" work. However, that wasn't an SCNParticleSystem component. Particles can be a bit tricky to work with. I'm not great with selector stuff, so I can't really tell just from looking at the code.
particleColorVariation randomizes the specified color. That doesn't sound exactly like what you want, but it might get you close: the vector specifies ranges for (hue, saturation, brightness, alpha), in that order.

How to make particles in SCNParticleSystem opaque?

I created a SceneKit Scene File > Particle System, and I can't figure out how to make all the particles opaque. The default particles' alpha setting seems random. I changed the image and a few other properties, and took a screen shot:
I've tried:
particle.particleColorVariation = SCNVector4(0, 0, 0, 0)
which only makes the particles around 80%-90% transparent; I cannot get them 100% opaque.
To make a particle system fully opaque, you need to set its blendMode instance property to .alpha (the default is .additive) and its sortingMode instance property to .distance (the default is .none):
var blendMode: SCNParticleBlendMode { get set }
var sortingMode: SCNParticleSortingMode { get set }
According to Apple's documentation:
blendMode is the blending mode for compositing particle images into the rendered scene.
There are six compositing blend modes for particles in SceneKit:
.additive
.alpha
.multiply
.replace
.screen
.subtract
Here's how it looks in real code:
let scnView = self.view as! SCNView
scnView.scene = scene
scnView.allowsCameraControl = true
scnView.backgroundColor = NSColor.black

let particleSystem = SCNParticleSystem()
particleSystem.birthRate = 5
particleSystem.blendMode = .alpha // 100% opaque if alpha = 1.0
particleSystem.sortingMode = .distance
particleSystem.particleSize = 1.0
particleSystem.emitterShape = SCNSphere(radius: 5)
particleSystem.particleLifeSpan = 100
particleSystem.particleColor = .red
// no alpha variation
particleSystem.particleColorVariation = SCNVector4(1, 1, 1, 0)

let particlesNode = SCNNode()
particlesNode.addParticleSystem(particleSystem)
scnView.scene!.rootNode.addChildNode(particlesNode)