How to combine SCNRenderer with an existing MTLCommandBuffer? - swift

I successfully integrated the Vuforia SDK Image Target Tracking feature into an iOS project by combining the OpenGL context (EAGLContext) that the SDK provides with an instance of SceneKit's SCNRenderer. That allowed me to leverage the simplicity of SceneKit's 3D API while still benefiting from Vuforia's high-precision image detection. Now, I'd like to do the same by replacing OpenGL with Metal.
Some background story
I was able to draw SceneKit objects on top of the live video texture drawn by Vuforia using OpenGL without major problems.
Here's the simplified setup I used with OpenGL:
func configureRenderer(for context: EAGLContext) {
    self.renderer = SCNRenderer(context: context, options: nil)
    self.scene = SCNScene()
    renderer.scene = scene
    // other SceneKit setup
}

func render() {
    // manipulate SceneKit nodes
    renderer.render(atTime: CFAbsoluteTimeGetCurrent())
}
Apple deprecates OpenGL on iOS 12
Since Apple announced that it is deprecating OpenGL in iOS 12, I figured it would be a good idea to try to migrate this project to use Metal instead of OpenGL.
That should be simple in theory, as Vuforia supports Metal out of the box. However, when trying to integrate it, I hit a wall.
The question
The view only ever renders either the results of the SceneKit renderer or the textures encoded by Vuforia, but never both at the same time, depending on which is encoded first. What do I have to do to blend both results together?
Here's the problematic setup in a nutshell:
func configureRenderer(for device: MTLDevice) {
    self.renderer = SCNRenderer(device: device, options: nil)
    self.scene = SCNScene()
    renderer.scene = scene
    // other SceneKit setup
}

func render(viewport: CGRect, commandBuffer: MTLCommandBuffer, drawable: CAMetalDrawable) {
    // manipulate SceneKit nodes
    let renderPassDescriptor = MTLRenderPassDescriptor()
    renderPassDescriptor.colorAttachments[0].texture = drawable.texture
    renderPassDescriptor.colorAttachments[0].loadAction = .load
    renderPassDescriptor.colorAttachments[0].storeAction = .store
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
    renderer!.render(withViewport: viewport, commandBuffer: commandBuffer, passDescriptor: renderPassDescriptor)
}
I tried calling render(viewport:commandBuffer:drawable:) either after [encoder endEncoding] or before [commandBuffer renderCommandEncoderWithDescriptor:]:
metalDevice = MTLCreateSystemDefaultDevice();
metalCommandQueue = [metalDevice newCommandQueue];
id<MTLCommandBuffer> commandBuffer = [metalCommandQueue commandBuffer];
//// -----> call the `render(viewport:commandBuffer:drawable:)` method here <------- \\\\
id<MTLRenderCommandEncoder> encoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];
// calls to encoder to render textures from Vuforia
[encoder endEncoding];
//// -----> or here <------- \\\\
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];
In either case, I only see results of SCNRenderer OR results of the encoder, but never both in the same view.
It seems to me as if the encoding pass above and the SCNRenderer's render call are overwriting each other's output.
What am I missing here?

I think I've found an answer.
I render with the SCNRenderer after endEncoding, but I create a new render pass descriptor for it.
// Pass Metal context data to Vuforia Engine (we may have changed the encoder since
// calling Vuforia::Renderer::begin)
finishRender(UnsafeMutableRawPointer(Unmanaged.passRetained(drawable!.texture).toOpaque()), UnsafeMutableRawPointer(Unmanaged.passRetained(encoder!).toOpaque()))
// ========== Finish Metal rendering ==========
encoder?.endEncoding()
// Commit the rendering commands
// Command completed handler
commandBuffer?.addCompletedHandler { _ in self.mCommandExecutingSemaphore.signal()}
let screenSize = UIScreen.main.bounds.size
let newDescriptor = MTLRenderPassDescriptor()
// Draw to the drawable's texture
newDescriptor.colorAttachments[0].texture = drawable?.texture
// Store the data in the texture when rendering is complete
newDescriptor.colorAttachments[0].storeAction = MTLStoreAction.store
// Use textureDepth for depth operations.
newDescriptor.depthAttachment.texture = mDepthTexture
renderer?.render(atTime: 0, viewport: CGRect(x: 0, y: 0, width: screenSize.width, height: screenSize.height), commandBuffer: commandBuffer!, passDescriptor: newDescriptor)
// Present the drawable when the command buffer has been executed (Metal
// calls to CoreAnimation to tell it to put the texture on the display when
// the rendering is complete)
commandBuffer?.present(drawable!)
// Commit the command buffer for execution as soon as possible
commandBuffer?.commit()
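For clarity, here's a condensed, untested sketch of the overall ordering this boils down to. The names vuforiaPassDescriptor and encodeVuforiaDrawCalls are placeholders for whatever your Vuforia integration provides, and renderer is the SCNRenderer from the question; I'd set loadAction to .load so the second pass doesn't wipe what Vuforia already drew.
// Condensed sketch of the pass ordering; helper names are placeholders.
func render(viewport: CGRect, commandBuffer: MTLCommandBuffer, drawable: CAMetalDrawable) {
    // 1. Encode Vuforia's video background / target rendering first.
    let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: vuforiaPassDescriptor)!
    encodeVuforiaDrawCalls(encoder)
    encoder.endEncoding()

    // 2. Then encode SceneKit into the SAME drawable with a second pass
    //    that loads (rather than clears) the existing contents.
    let scnPass = MTLRenderPassDescriptor()
    scnPass.colorAttachments[0].texture = drawable.texture
    scnPass.colorAttachments[0].loadAction = .load      // keep Vuforia's pixels
    scnPass.colorAttachments[0].storeAction = .store
    renderer?.render(atTime: CFAbsoluteTimeGetCurrent(),
                     viewport: viewport,
                     commandBuffer: commandBuffer,
                     passDescriptor: scnPass)

    // 3. Present and commit once, after both passes have been encoded.
    commandBuffer.present(drawable)
    commandBuffer.commit()
}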

Related

Blur face in face detection in vision kit

I'm using Apple's tutorial about face detection in Vision in a live camera feed, not a still image.
https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
It detects the face and uses a CAShapeLayer to draw lines between different parts of the face.
fileprivate func setupVisionDrawingLayers() {
    let captureDeviceResolution = self.captureDeviceResolution

    let captureDeviceBounds = CGRect(x: 0,
                                     y: 0,
                                     width: captureDeviceResolution.width,
                                     height: captureDeviceResolution.height)

    let captureDeviceBoundsCenterPoint = CGPoint(x: captureDeviceBounds.midX,
                                                 y: captureDeviceBounds.midY)

    let normalizedCenterPoint = CGPoint(x: 0.5, y: 0.5)

    guard let rootLayer = self.rootLayer else {
        self.presentErrorAlert(message: "view was not property initialized")
        return
    }

    let overlayLayer = CALayer()
    overlayLayer.name = "DetectionOverlay"
    overlayLayer.masksToBounds = true
    overlayLayer.anchorPoint = normalizedCenterPoint
    overlayLayer.bounds = captureDeviceBounds
    overlayLayer.position = CGPoint(x: rootLayer.bounds.midX, y: rootLayer.bounds.midY)

    let faceRectangleShapeLayer = CAShapeLayer()
    faceRectangleShapeLayer.name = "RectangleOutlineLayer"
    faceRectangleShapeLayer.bounds = captureDeviceBounds
    faceRectangleShapeLayer.anchorPoint = normalizedCenterPoint
    faceRectangleShapeLayer.position = captureDeviceBoundsCenterPoint
    faceRectangleShapeLayer.fillColor = nil
    faceRectangleShapeLayer.strokeColor = UIColor.green.withAlphaComponent(0.7).cgColor
    faceRectangleShapeLayer.lineWidth = 5
    faceRectangleShapeLayer.shadowOpacity = 0.7
    faceRectangleShapeLayer.shadowRadius = 5

    let faceLandmarksShapeLayer = CAShapeLayer()
    faceLandmarksShapeLayer.name = "FaceLandmarksLayer"
    faceLandmarksShapeLayer.bounds = captureDeviceBounds
    faceLandmarksShapeLayer.anchorPoint = normalizedCenterPoint
    faceLandmarksShapeLayer.position = captureDeviceBoundsCenterPoint
    faceLandmarksShapeLayer.fillColor = nil
    faceLandmarksShapeLayer.strokeColor = UIColor.yellow.withAlphaComponent(0.7).cgColor
    faceLandmarksShapeLayer.lineWidth = 3
    faceLandmarksShapeLayer.shadowOpacity = 0.7
    faceLandmarksShapeLayer.shadowRadius = 5

    overlayLayer.addSublayer(faceRectangleShapeLayer)
    faceRectangleShapeLayer.addSublayer(faceLandmarksShapeLayer)
    rootLayer.addSublayer(overlayLayer)

    self.detectionOverlayLayer = overlayLayer
    self.detectedFaceRectangleShapeLayer = faceRectangleShapeLayer
    self.detectedFaceLandmarksShapeLayer = faceLandmarksShapeLayer

    self.updateLayerGeometry()
}
How can I fill the area inside the lines (the different parts of the face) with a blurry view? I need to blur the face.
You could try placing a UIVisualEffectView on top of your video feed, and then adding a masking CAShapeLayer to that UIVisualEffectView. I don't know if that would work or not.
The docs on UIVisualEffectView say:
When using the UIVisualEffectView class, avoid alpha values that are less than 1. Creating views that are partially transparent causes the system to combine the view and all the associated subviews during an offscreen render pass. UIVisualEffectView objects need to be combined as part of the content they are layered on top of in order to look correct. Setting the alpha to less than 1 on the visual effect view or any of its superviews causes many effects to look incorrect or not show up at all.
I don't know if using a mask layer on a visual effect view would cause the same rendering problems or not. You'd have to try it. (And be sure to try it on a range of different hardware, since the rendering performance varies quite a bit between different versions of Apple's chipsets.)
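I haven't tried it myself, but a rough sketch of the mask idea could look like this. Here previewView and faceBoundingBox (in previewView's coordinate space) are assumptions, not part of your code:
import UIKit

// Untested sketch: a blur view covering the camera preview, masked so that
// only the detected face rectangle shows the blur.
func addBlurOverlay(to previewView: UIView, faceBoundingBox: CGRect) -> UIVisualEffectView {
    let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .regular))
    blurView.frame = previewView.bounds
    previewView.addSubview(blurView)

    // Mask the effect view so the blur is only visible over the face.
    let maskLayer = CAShapeLayer()
    maskLayer.frame = blurView.bounds
    maskLayer.path = UIBezierPath(rect: faceBoundingBox).cgPath
    blurView.layer.mask = maskLayer
    return blurView
}
You would update maskLayer.path every frame from the Vision observation, the same way the tutorial updates its shape layers.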
You could also try using a shape layer filled with visual hash or a "pixellated" pattern instead of blurring. That would be faster and probably render more reliably.
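As a rough, untested sketch of that alternative, you could mask a pattern-filled layer with the face path (facePath and bounds are assumptions; the tile is just a placeholder "visual hash"):
import UIKit

func makePatternFaceLayer(facePath: CGPath, bounds: CGRect) -> CALayer {
    // Tiny generated tile used as a repeating pattern.
    let tileSize = CGSize(width: 8, height: 8)
    let tile = UIGraphicsImageRenderer(size: tileSize).image { ctx in
        for x in stride(from: CGFloat(0), to: tileSize.width, by: 4) {
            for y in stride(from: CGFloat(0), to: tileSize.height, by: 4) {
                UIColor(white: .random(in: 0...1), alpha: 1).setFill()
                ctx.fill(CGRect(x: x, y: y, width: 4, height: 4))
            }
        }
    }

    // A pattern-filled layer, masked by the face path.
    let patternLayer = CALayer()
    patternLayer.frame = bounds
    patternLayer.backgroundColor = UIColor(patternImage: tile).cgColor

    let mask = CAShapeLayer()
    mask.frame = patternLayer.bounds
    mask.path = facePath
    patternLayer.mask = mask
    return patternLayer
}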
Note that face detection tends to be a little jumpy. It might drop out for a few frames, or lag on quick pans or change of scene. If you're trying to hide people's faces in a live feed for privacy, it might not be reliable. It would only take a few un-blurred frames for somebody's identity to be revealed.

RealityKit .nonAR installGestures is missing translation and rotation is y axis only

I'm trying to reverse engineer the 3d Scanner App using RealityKit and am having real trouble getting just a basic model working with all gestures. When I run the code below, I get a cube with scale and rotation (about the y axis only), but no translation interaction. I'm trying to figure out how to get rotation about an arbitrary axis as well as translation, like in the 3d Scanner App above. I'm relatively new to iOS and read that one should use RealityKit, as Apple isn't really supporting SceneKit anymore, but I'm now wondering whether SceneKit would be the way to go, since RealityKit is still young. Or perhaps someone knows of an extension to RealityKit ModelEntity objects to give them better interaction capabilities.
I've got my app taking a scan with the LiDAR sensor and saving it to disk as a .usda mesh, per this tutorial, but when I load the mesh as a ModelEntity and attach gestures to it, I don't get any interaction at all.
The example code below recreates the limited gestures for a box ModelEntity, and I have some commented lines showing where I would load my .usda model from disk, but again, while it renders, it gets no gesture interaction.
Any help appreciated!
// ViewController.swift
import UIKit
import RealityKit

class ViewController: UIViewController {

    var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        arView = ARView(frame: view.frame, cameraMode: .nonAR, automaticallyConfigureSession: false)
        view.addSubview(arView)

        // create point light
        let pointLight = PointLight()
        pointLight.light.intensity = 10000

        // create light anchor
        let lightAnchor = AnchorEntity(world: [0, 0, 0])
        lightAnchor.addChild(pointLight)
        arView.scene.addAnchor(lightAnchor)

        // eventually want to load my model from disk and give it gestures.
        // guard let scanEntity = try? Entity.loadModel(contentsOf: urlOBJ) else {
        //     print("couldn't load scan in this format")
        //     return
        // }

        // entity to add gestures to
        let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: true)
        let myEntity = ModelEntity(mesh: .generateBox(width: 0.1, height: 0.2, depth: 0.3, cornerRadius: 0.01, splitFaces: false), materials: [cubeMaterial])
        myEntity.generateCollisionShapes(recursive: false)

        let myAnchor = AnchorEntity(world: .zero)
        myAnchor.addChild(myEntity)

        // add collision and interaction
        let scanEntityBounds = myEntity.visualBounds(relativeTo: myAnchor)
        myEntity.collision = CollisionComponent(shapes: [.generateBox(size: scanEntityBounds.extents).offsetBy(translation: scanEntityBounds.center)])

        arView.installGestures(for: myEntity).forEach { gestureRecognizer in
            gestureRecognizer.addTarget(self, action: #selector(handleGesture(_:)))
        }
        arView.scene.addAnchor(myAnchor)

        // without this, get no gestures at all
        let camera = PerspectiveCamera()
        let cameraAnchor = AnchorEntity(world: [0, 0, 0.2])
        cameraAnchor.addChild(camera)
        arView.scene.addAnchor(cameraAnchor)
    }

    @objc private func handleGesture(_ recognizer: UIGestureRecognizer) {
        if recognizer is EntityTranslationGestureRecognizer {
            print("translation!")
        } else if recognizer is EntityScaleGestureRecognizer {
            print("scale!")
        } else if recognizer is EntityRotationGestureRecognizer {
            print("rotation!")
        }
    }
}
To extend a ModelEntity's gesture interaction capabilities, set up your own 2D gestures. There are 8 screen gestures in UIKit, and in SwiftUI you have 5 principal gestures plus Sequence, Simultaneous and Exclusive variations.
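As a minimal, untested sketch of that idea (assuming a stored entityToMove ModelEntity and that this code lives in your view controller), a pan recognizer can drive translation directly:
// Add to your ViewController. `entityToMove` is a placeholder for a stored
// reference to the ModelEntity you want to drag.
var entityToMove: ModelEntity?

func setupCustomPan(on arView: ARView) {
    let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
    arView.addGestureRecognizer(pan)
}

@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    guard let entity = entityToMove else { return }
    let translation = recognizer.translation(in: recognizer.view)
    // Crude mapping: 0.001 m of world translation per screen point.
    entity.position.x += Float(translation.x) * 0.001
    entity.position.y -= Float(translation.y) * 0.001
    recognizer.setTranslation(.zero, in: recognizer.view)
}
You can refine the screen-to-world mapping (e.g. via ray casting) as needed; the point is just that nothing stops you from driving an entity's transform from plain UIKit recognizers.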
From what I understand, the gestures are working for the box but not for your .usdz file/model. If that is the case, the issue is that the model does not have a collision shape (HasCollision). If you are using Reality Composer to edit your models, you could do the following:
Click on the model.
Under the Physics dropdown, click Participate.
Under collision shape, select Automatic.
Overall, make sure that the model has collision, and that in code you cast it to a type that has collision:
let myEntity = try? Entity.loadModel(named: "fileName") as! HasCollision
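If you'd rather handle it entirely in code instead of Reality Composer, a rough sketch (assuming the scan is bundled as "scannedModel", a placeholder name, and reusing the handleGesture selector from your code):
// Rough sketch: load the scanned model, give it collision shapes in code,
// and then install gestures on it.
if let scanEntity = try? Entity.loadModel(named: "scannedModel") {
    scanEntity.generateCollisionShapes(recursive: true)
    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(scanEntity)
    arView.scene.addAnchor(anchor)
    arView.installGestures(for: scanEntity).forEach {
        $0.addTarget(self, action: #selector(handleGesture(_:)))
    }
}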

Merge CAShapeLayer into CVPixelBuffer

I'm capturing the output of a playing video using AVPlayerItemVideoOutput.copyPixelBuffer
I'm able to convert the pixel buffer into a CIImage, then render it back into a pixel buffer again, and then an AVAssetWriter writes the buffer stream out to a new movie clip successfully.
The reason I'm converting to CIImage is I want to do some manipulation of each frame. (So far I don't understand how to manipulate pixel buffers directly).
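For context, a simplified sketch of that round trip (CVPixelBuffer → CIImage → new CVPixelBuffer); the helper name and structure here are illustrative, not my exact code:
import CoreImage
import CoreVideo

let sharedCIContext = CIContext()   // create once and reuse per frame

func roundTrip(_ buffer: CVPixelBuffer) -> CVPixelBuffer? {
    let image = CIImage(cvPixelBuffer: buffer)
    // ... apply filters / compositing to `image` here ...

    var output: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(buffer),
                        CVPixelBufferGetHeight(buffer),
                        CVPixelBufferGetPixelFormatType(buffer),
                        nil,
                        &output)
    guard let output = output else { return nil }
    sharedCIContext.render(image, to: output)
    return output
}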
In this case I want to overlay a "scribble" style drawing that the user does with their finger. While the video plays, they can draw over it. I'm capturing this drawing successfully into a CAShapeLayer.
The code below outputs just the overlay CAShapeLayer successfully. When I try to reincorporate the original frame by uncommenting the lines shown, the entire process bogs down drastically and drops from 60fps to an unstable 10fps or so on an iPhone 12. I get stable 60fps in all cases except when I uncomment that code.
What's the best way to incorporate the shape layer into this stream of pixel buffers in 60fps "real time"?
Note: some of this code is not finalized -- setting bounds correctly, etc. However this is not related to my question and I'm aware that has to be done. The rotation/translation are there to orient the shape layer -- this all works for now.
func addShapesToBuffer(buffer: CVPixelBuffer, shapeLayer: CAShapeLayer) -> CVPixelBuffer? {
    let coreImage = CIImage(cvImageBuffer: buffer)
    let newBuffer = getBuffer(from: coreImage)
    CVPixelBufferLockBaseAddress(newBuffer!, [])

    let rect = CGRect(origin: .zero, size: CGSize(width: 800, height: 390))

    shapeLayer.shouldRasterize = true
    shapeLayer.rasterizationScale = UIScreen.main.scale
    shapeLayer.backgroundColor = UIColor.clear.cgColor

    let renderer = UIGraphicsImageRenderer(size: rect.size)
    let uiImageDrawing = renderer.image { context in
        // let videoImage = UIImage(ciImage: coreImage)
        // videoImage.draw(in: rect)
        let cgContext = context.cgContext
        cgContext.rotate(by: deg2rad(-90))
        cgContext.translateBy(x: -390, y: 0)
        shapeLayer.render(in: cgContext)
    }

    let ciContext = CIContext()
    let newImage = CIImage(cgImage: uiImageDrawing.cgImage!)
    ciContext.render(newImage, to: newBuffer!)
    CVPixelBufferUnlockBaseAddress(newBuffer!, [])

    return newBuffer
}

ARKit – How to display the feed from a virtual SCNCamera placed on SCNPlane?

I put some objects in AR space using ARKit and SceneKit. That works well. Now I'd like to add an additional camera (SCNCamera) that is placed elsewhere in the scene, attached to and positioned by a common SCNNode. It is oriented to show me the current scene from another (fixed) perspective.
Now I'd like to show this additional SCNCamera feed on, e.g., an SCNPlane (as the first material's diffuse contents) - like a TV screen. Of course I am aware that it will only display the SceneKit content that is in that camera's view, and not the rest of the ARKit image (which is only possible through the main camera). A simple colored background would then be fine.
I have seen tutorials that describe how to play a video file on a virtual display in AR space, but I need a real-time camera feed from my own current scene.
I defined these objects:
let camera = SCNCamera()
let cameraNode = SCNNode()
Then in viewDidLoad I do this:
camera.usesOrthographicProjection = true
camera.orthographicScale = 9
camera.zNear = 0
camera.zFar = 100
cameraNode.camera = camera
sceneView.scene.rootNode.addChildNode(cameraNode)
Then I call my setup function to place the virtual display next to all my AR stuff, and position the cameraNode as well (pointing in the direction where the objects are in the scene):
cameraNode.position = SCNVector3(initialStartPosition.x, initialStartPosition.y + 0.5, initialStartPosition.z)
let cameraPlane = SCNNode(geometry: SCNPlane(width: 0.5, height: 0.3))
cameraPlane.geometry?.firstMaterial?.diffuse.contents = cameraNode.camera
cameraPlane.position = SCNVector3(initialStartPosition.x - 1.0, initialStartPosition.y + 0.5, initialStartPosition.z)
sceneView.scene.rootNode.addChildNode(cameraPlane)
Everything compiles and loads... The display shows up at the given position, but it stays entirely gray. Nothing is displayed at all from the SCNCamera I put in the scene. Everything else in the AR scene works well, I just don't get any feed from that camera.
Does anyone have an approach to get this scenario working?
To visualize it better, I've added some screenshots.
The following shows the image through the SCNCamera according to ARGeo's input. But it takes up the whole screen, instead of displaying its contents on an SCNPlane, as I need.
The next screenshot shows the actual ARView result I get with my posted code. As you can see, the gray display plane remains gray - it shows nothing.
The last screenshot is a photomontage showing the expected result, as I'd like to get it.
How could this be realized? Am I missing something fundamental here?
After some research and sleep, I came to the following, working solution (including some inexplicable obstacles):
Currently, the additional SCNCamera feed is not linked to an SCNMaterial on an SCNPlane, as was the initial idea; instead I use an additional SCNView (for the moment).
In the definitions I add another view like so:
let overlayView = SCNView() // (also tested with ARSCNView(), no difference)
let camera = SCNCamera()
let cameraNode = SCNNode()
Then, in viewDidLoad, I set things up like so:
camera.automaticallyAdjustsZRange = true
camera.usesOrthographicProjection = false
cameraNode.camera = camera
cameraNode.camera?.focalLength = 50
sceneView.scene.rootNode.addChildNode(cameraNode) // add the node to the default scene
overlayView.scene = scene // the same scene as sceneView
overlayView.allowsCameraControl = false
overlayView.isUserInteractionEnabled = false
overlayView.pointOfView = cameraNode // this links the new SCNView to the created SCNCamera
self.view.addSubview(overlayView) // don't forget to add as subview
// Size and place the view on the bottom
overlayView.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width * 0.8, height: self.view.bounds.height * 0.25)
overlayView.center = CGPoint(x: self.view.bounds.width * 0.5, y: self.view.bounds.height - 175)
Then, in some other function, I place the node containing the SCNCamera at my desired position and angle.
// (exemplary)
cameraNode.position = initialStartPosition + SCNVector3(x: -0.5, y: 0.5, z: -(Float(shiftCurrentDistance * 2.0 - 2.0)))
cameraNode.eulerAngles = SCNVector3(-15.0.degreesToRadians, -15.0.degreesToRadians, 0.0)
The result is a kind of window (the new SCNView) at the bottom of the screen, displaying the same SceneKit content as the main sceneView, viewed through the perspective of the SCNCamera and its node position - and it does this very nicely.
In a common iOS/Swift/ARKit project, this construct generates some side effects that one may run into.
1) Mainly, the new SCNView shows the SceneKit content from the desired perspective, but the background is always the actual physical camera feed. I could not figure out how to make the background a static color while still displaying all the SceneKit content. Changing the new scene's background property also affects the whole main scene, which is NOT desired.
2) It might sound confusing, but as soon as the following code gets included (which is essential to make it work):
overlayView.scene = scene
the animation speed of both scenes DOUBLES! (Why?)
I got this corrected by adding/changing the following property, which restores the animation speed to almost normal (default) behaviour:
// add or change this in the scene setup
scene.physicsWorld.speed = 0.5
3) If there are actions like SCNAction.playAudio in the project, the effects will no longer play - unless I do this:
overlayView.scene = nil
Of course, the additional SCNView then stops working, but everything else goes back to normal.
Use this code (as a starting point) to find out how to set up a virtual camera.
Just create a default ARKit project in Xcode and copy-paste my code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        sceneView.delegate = self
        sceneView.showsStatistics = true
        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 1)
        cameraNode.camera?.focalLength = 70
        cameraNode.camera?.categoryBitMask = 1
        scene.rootNode.addChildNode(cameraNode)

        sceneView.pointOfView = cameraNode
        sceneView.allowsCameraControl = true
        sceneView.backgroundColor = UIColor.darkGray

        let plane = SCNNode(geometry: SCNPlane(width: 0.8, height: 0.45))
        plane.position = SCNVector3(0, 0, -1.5)

        // ASSIGN A VIDEO STREAM FROM SCENEKIT-RECORDER TO YOUR MATERIAL
        plane.geometry?.materials.first?.diffuse.contents = capturedVideoFromSceneKitRecorder
        scene.rootNode.addChildNode(plane)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}
UPDATED:
Here's a SceneKit Recorder App that you can tailor to your needs (you don't need to write a video to disk, just use a CVPixelBuffer stream and assign it as a texture for a diffuse material).
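As a rough, untested sketch of that idea, each incoming pixel buffer could be converted and assigned to the plane's material; planeNode and the per-frame callback are placeholders for your own recorder plumbing:
import SceneKit
import CoreImage

let ciContext = CIContext()   // create once and reuse

// Call this for every CVPixelBuffer your recorder produces.
func updatePlaneTexture(_ planeNode: SCNNode, with pixelBuffer: CVPixelBuffer) {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
    // Per-frame CGImage conversion isn't the fastest path, but it's the simplest to get working.
    planeNode.geometry?.firstMaterial?.diffuse.contents = cgImage
}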
Hope this helps.
I'm a little late to the party, but I've had a similar issue recently.
As far as I can tell, you cannot directly connect a camera to a node's material. You can, however, use a scene's layer as a texture for a node.
The code below is not verified, but should be more or less ok:
class MyViewController: UIViewController {
    override func loadView() {
        let projectedScene = createProjectedScene()
        let receivingScene = createReceivingScene()
        let projectionPlane = receivingScene.scene?.rootNode.childNode(withName: "ProjectionPlane", recursively: true)
        // Here's the important part:
        // You can't directly connect a camera to a material's diffuse texture.
        // But you can connect a scene's layer as a texture.
        projectionPlane?.geometry?.firstMaterial?.diffuse.contents = projectedScene.layer
        projectedScene.layer.contentsScale = 1
        // Note how we only need to connect the receiving view to the controller.
        // The projected view is not directly added as a subview,
        // but updates in projectedScene will still be reflected in receivingScene.
        self.view = receivingScene
    }

    func createProjectedScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        return view
    }

    func createReceivingScene() -> SCNView {
        let view = SCNView()
        view.scene = SCNScene()
        // ... set up scene ...
        let projectionPlane = SCNNode(geometry: SCNPlane(width: 2, height: 2))
        projectionPlane.name = "ProjectionPlane"
        view.scene?.rootNode.addChildNode(projectionPlane)
        return view
    }
}

How to use multisampling with an MTKView?

I'm trying to get multisampling working with MTKView. I have an MTKView with a delegate. I set the view's sampleCount property to 4. I create a pipeline state descriptor with the rasterSampleCount set to 4, and use that to make a render pipeline state that I use when rendering.
In the delegate's draw(in:) method, I create a render pass descriptor by getting the view's current render pass descriptor and setting the storeAction to multisampleResolve. I've also tried storeAndMultisampleResolve, to no avail.
I have created a resolve texture for the render pass descriptor, and it is the same width and height as the view and the same pixel format.
Given the above, I get a full red frame during rendering. I have used the Metal debugger to look at the textures, and both the view's texture and the resolve texture have the correct rendering in them. I'm on an AMD machine, where a fully red texture often indicates an uninitialized texture.
Is there anything I need to do to get the rendering to go to the screen?
Here's how I'm setting up the view, pipeline state, and resolve texture:
metalView = newMetalView
metalView.sampleCount = 4
metalView.clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.0)
device = newMetalView.device!

let metalLibrary = device.makeDefaultLibrary()!
let vertexFunction = metalLibrary.makeFunction(name: "vertexShader")
let fragmentFunction = metalLibrary.makeFunction(name: "fragmentShader")

let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
pipelineStateDescriptor.label = "Particle Renderer"
pipelineStateDescriptor.vertexFunction = vertexFunction
pipelineStateDescriptor.fragmentFunction = fragmentFunction
pipelineStateDescriptor.colorAttachments[0].pixelFormat = metalView.colorPixelFormat
pipelineStateDescriptor.rasterSampleCount = 4

do {
    pipelineState = try device.makeRenderPipelineState(descriptor: pipelineStateDescriptor)
} catch {
    NSLog("Unable to create pipeline state")
    pipelineState = nil
}

let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: metalView.colorPixelFormat, width: Int(metalView.bounds.width), height: Int(metalView.bounds.height), mipmapped: false)
resolveTexture = device.makeTexture(descriptor: textureDescriptor)!
And here's how I'm drawing:
let commandBuffer = commandQueue.makeCommandBuffer()
commandBuffer?.label = "Particle Command Buffer"

let renderPassDescriptor = metalView.currentRenderPassDescriptor
renderPassDescriptor?.colorAttachments[0].clearColor = MTLClearColorMake(0.0, 0.0, 0.0, 0.0)
renderPassDescriptor?.colorAttachments[0].loadAction = MTLLoadAction.clear
renderPassDescriptor?.colorAttachments[0].storeAction = MTLStoreAction.multisampleResolve
renderPassDescriptor?.colorAttachments[0].resolveTexture = resolveTexture

let renderEncoder = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPassDescriptor!)
renderEncoder?.label = "Particle Render Encoder"
renderEncoder?.setViewport(MTLViewport(originX: 0.0, originY: 0.0, width: Double(viewportSize.x), height: Double(viewportSize.y), znear: -1.0, zfar: 1.0))
renderEncoder?.setRenderPipelineState(pipelineState!)
Then I make my draw calls, and then finish up by calling:
renderEncoder?.endEncoding()
commandBuffer?.present(metalView.currentDrawable!)
commandBuffer?.commit()
Here's what the debugger shows is in my textures:
Oddly, while doing that debugging, I accidentally hid Xcode, and for 1 frame, the view showed the correct texture.
What's the initial configuration of renderPassDescriptor (as returned from metalView.currentRenderPassDescriptor)?
I believe you want the color attachment's texture set to metalView.multisampleColorTexture and its resolveTexture set to metalView.currentDrawable.texture. That is, it should do the primary, multi-sampled rendering to the multi-sample texture and then that gets resolved to the drawable texture to actually draw it in the view.
I don't know if MTKView sets up its currentRenderPassDescriptor like that automatically when there's a sampleCount > 1. Ideally, it would.
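For concreteness, here's an untested sketch of how I'd expect the per-frame setup to look inside the delegate's draw(in:) when the view's sampleCount is 4:
func draw(in view: MTKView) {
    guard let drawable = view.currentDrawable,
          let descriptor = view.currentRenderPassDescriptor else { return }

    // Primary rendering goes to the view's multisample texture...
    descriptor.colorAttachments[0].texture = view.multisampleColorTexture
    // ...and is resolved into the drawable's texture, which is what gets presented.
    descriptor.colorAttachments[0].resolveTexture = drawable.texture
    descriptor.colorAttachments[0].loadAction = .clear
    descriptor.colorAttachments[0].storeAction = .multisampleResolve

    // ... make the encoder with `descriptor`, issue draw calls, end encoding,
    // present `drawable`, and commit, as in your existing code ...
}
If MTKView already fills these in when sampleCount > 1, then the key point may simply be not to override resolveTexture with your own offscreen texture, since then nothing is ever resolved into the drawable that gets presented.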