Chunk Rendering in Metal (Swift)

I'm trying to create a procedural game using Metal, and I'm using an octree-based chunk approach for a level-of-detail implementation.
The CPU creates the octree nodes for the terrain, and a compute shader on the GPU then generates each node's mesh. The mesh is stored in a vertex buffer and an index buffer on the chunk object for rendering.
All of this seems to work fairly well; however, when it comes to rendering chunks I'm hitting performance issues early on. Currently I gather an array of chunks to draw and submit it to my renderer, which creates an MTLParallelRenderCommandEncoder and then an MTLRenderCommandEncoder for each chunk, which is then submitted to the GPU.
By the looks of it, around 50% of the CPU time is spent creating the MTLRenderCommandEncoder for each chunk. At this early stage I'm only creating a simple eight-vertex cube mesh per chunk, in a 4x4x4 array of chunks, and I'm already dropping to around 50 fps. (In reality it seems there can be at most 63 MTLRenderCommandEncoders per MTLParallelRenderCommandEncoder, so it's not the full 4x4x4.)
I've read that the point of MTLParallelRenderCommandEncoder is to create each MTLRenderCommandEncoder on a separate thread, yet I've not had much luck getting this to work. In any case, multithreading wouldn't get around the apparent cap of 63 encoders per pass.
I feel that somehow consolidating the vertex and index buffers of all chunks into one or two larger buffers for submission would help, but I'm not sure how to do this without copious memcpy() calls, or whether it would even improve efficiency.
Here's my code that takes in the array of nodes and draws them:
func drawNodes(nodes: [OctreeNode], inView view: AHMetalView) {
    // For control of several rotating buffers
    dispatch_semaphore_wait(displaySemaphore, DISPATCH_TIME_FOREVER)
    makeDepthTexture()
    updateUniformsForView(view, duration: view.frameDuration)
    let commandBuffer = commandQueue.commandBuffer()
    let optDrawable = layer.nextDrawable()
    guard let drawable = optDrawable else {
        return
    }
    let passDescriptor = MTLRenderPassDescriptor()
    passDescriptor.colorAttachments[0].texture = drawable.texture
    passDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0.2, 0.2, 0.2, 1)
    passDescriptor.colorAttachments[0].storeAction = .Store
    passDescriptor.colorAttachments[0].loadAction = .Clear
    passDescriptor.depthAttachment.texture = depthTexture
    passDescriptor.depthAttachment.clearDepth = 1
    passDescriptor.depthAttachment.loadAction = .Clear
    passDescriptor.depthAttachment.storeAction = .Store
    let parallelRenderPass = commandBuffer.parallelRenderCommandEncoderWithDescriptor(passDescriptor)
    // Currently 63 nodes as a maximum
    for node in nodes {
        // This line is taking up around 50% of the CPU time
        let renderPass = parallelRenderPass.renderCommandEncoder()
        renderPass.setRenderPipelineState(renderPipelineState)
        renderPass.setDepthStencilState(depthStencilState)
        renderPass.setFrontFacingWinding(.CounterClockwise)
        renderPass.setCullMode(.Back)
        let uniformBufferOffset = sizeof(AHUniforms) * uniformBufferIndex
        renderPass.setVertexBuffer(node.vertexBuffer, offset: 0, atIndex: 0)
        renderPass.setVertexBuffer(uniformBuffer, offset: uniformBufferOffset, atIndex: 1)
        renderPass.setTriangleFillMode(.Lines)
        renderPass.drawIndexedPrimitives(.Triangle, indexCount: AHMaxIndicesPerChunk, indexType: AHIndexType, indexBuffer: node.indexBuffer, indexBufferOffset: 0)
        renderPass.endEncoding()
    }
    parallelRenderPass.endEncoding()
    commandBuffer.presentDrawable(drawable)
    commandBuffer.addCompletedHandler { (commandBuffer) -> Void in
        self.uniformBufferIndex = (self.uniformBufferIndex + 1) % AHInFlightBufferCount
        dispatch_semaphore_signal(self.displaySemaphore)
    }
    commandBuffer.commit()
}

You note:
I've read that the point of the MTLParallelRenderCommandEncoder is to create each MTLRenderCommandEncoder in a separate thread...
And you're correct. What you're doing is sequentially creating, encoding with, and ending command encoders — there's nothing parallel going on here, so MTLParallelRenderCommandEncoder is doing nothing for you. You'd have roughly the same performance if you eliminated the parallel encoder and just created encoders with renderCommandEncoderWithDescriptor(_:) on each pass through your for loop... which is to say, you'd still have the same performance problem due to the overhead of creating all those encoders.
So, if you're going to encode sequentially, just reuse the same encoder. Also, you should reuse as much of your other shared state as possible. Here's a quick pass at a possible refactoring (untested):
let passDescriptor = MTLRenderPassDescriptor()

// Call this once before your render loop
func setup() {
    makeDepthTexture()
    passDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0.2, 0.2, 0.2, 1)
    passDescriptor.colorAttachments[0].storeAction = .Store
    passDescriptor.colorAttachments[0].loadAction = .Clear
    passDescriptor.depthAttachment.texture = depthTexture
    passDescriptor.depthAttachment.clearDepth = 1
    passDescriptor.depthAttachment.loadAction = .Clear
    passDescriptor.depthAttachment.storeAction = .Store
    // Set up render pipeline state and depth-stencil state
}

func drawNodes(nodes: [OctreeNode], inView view: AHMetalView) {
    updateUniformsForView(view, duration: view.frameDuration)
    // Set up completed handler ahead of time
    let commandBuffer = commandQueue.commandBuffer()
    commandBuffer.addCompletedHandler { _ in // unused parameter
        self.uniformBufferIndex = (self.uniformBufferIndex + 1) % AHInFlightBufferCount
        dispatch_semaphore_signal(self.displaySemaphore)
    }
    // Semaphore should be tied to drawable acquisition
    dispatch_semaphore_wait(displaySemaphore, DISPATCH_TIME_FOREVER)
    guard let drawable = layer.nextDrawable()
        else { return }
    // Set up the one part of the pass descriptor that changes per-frame
    passDescriptor.colorAttachments[0].texture = drawable.texture
    // Create one render command encoder and reuse it for all nodes
    let renderPass = commandBuffer.renderCommandEncoderWithDescriptor(passDescriptor)
    renderPass.setTriangleFillMode(.Lines)
    renderPass.setRenderPipelineState(renderPipelineState)
    renderPass.setDepthStencilState(depthStencilState)
    for node in nodes {
        // Update offsets and draw
        let uniformBufferOffset = sizeof(AHUniforms) * uniformBufferIndex
        renderPass.setVertexBuffer(node.vertexBuffer, offset: 0, atIndex: 0)
        renderPass.setVertexBuffer(uniformBuffer, offset: uniformBufferOffset, atIndex: 1)
        renderPass.drawIndexedPrimitives(.Triangle, indexCount: AHMaxIndicesPerChunk, indexType: AHIndexType, indexBuffer: node.indexBuffer, indexBufferOffset: 0)
    }
    renderPass.endEncoding()
    commandBuffer.presentDrawable(drawable)
    commandBuffer.commit()
}
Then, profile with Instruments to see what further performance issues, if any, you might have. There's a great WWDC 2015 session on exactly that, showing several of the common "gotchas", how to diagnose them in profiling, and how to fix them.
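As for the buffer-consolidation idea from the question: it can help, and it needn't involve any memcpy() at all, since the compute shader that generates each mesh can write directly into that chunk's slice of one big buffer. Here's a minimal, untested sketch in the same Swift 2-era API; AHVertex and AHMaxVerticesPerChunk are hypothetical names, and it assumes UInt32 indices and that chunk i owns slice i of each buffer:
    // One large vertex buffer and one large index buffer, suballocated per chunk.
    let chunkCount = 64
    let vertexSliceLength = sizeof(AHVertex) * AHMaxVerticesPerChunk
    let indexSliceLength = sizeof(UInt32) * AHMaxIndicesPerChunk
    let sharedVertexBuffer = device.newBufferWithLength(vertexSliceLength * chunkCount, options: .StorageModePrivate)
    let sharedIndexBuffer = device.newBufferWithLength(indexSliceLength * chunkCount, options: .StorageModePrivate)

    // At draw time, bind the shared buffers once, then step through them by offset.
    renderPass.setVertexBuffer(sharedVertexBuffer, offset: 0, atIndex: 0)
    for i in 0..<nodes.count {
        renderPass.setVertexBufferOffset(vertexSliceLength * i, atIndex: 0)
        renderPass.drawIndexedPrimitives(.Triangle, indexCount: AHMaxIndicesPerChunk, indexType: .UInt32, indexBuffer: sharedIndexBuffer, indexBufferOffset: indexSliceLength * i)
    }
With only offsets changing between draws, the encoder has less state to rebind per chunk, and there are no per-chunk buffer allocations at all.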

Related

How do I get SKAction sequences to run in Swift

I found this pretty useful code online but I'm having trouble getting it to run. The variable names are all correct, and I've used print statements to make sure execution does reach this function. It just doesn't seem to run the sequence on the label nodes. Thanks.
func fadeOutInfoText() {
    infoLabel1.removeAllActions()
    infoLabel2.removeAllActions()
    speechIcon.removeAllActions()
    let wait: SKAction = SKAction.wait(forDuration: 0.5)
    let fade: SKAction = SKAction.fadeAlpha(to: 0, duration: 0.5)
    let run: SKAction = SKAction.run {
        self.infoLabel1.text = ""
        self.infoLabel2.text = ""
        self.infoLabel1.alpha = 1
        self.infoLabel2.alpha = 1
        self.speechIcon.alpha = 1
        self.speechIcon.isHidden = true
    }
    let seq: SKAction = SKAction.sequence([wait, fade, run])
    let seq2: SKAction = SKAction.sequence([wait, fade])
    infoLabel1.run(seq)
    infoLabel2.run(seq2)
    speechIcon.run(seq2)
}
NOTE: This would be a comment (not enough reputation to do so yet :)
Executing the above code line for line (after adding the nodes to an empty scene) gives what appears to be the desired result. Make sure you are not calling this function from the scene's update(_:) method: doing so removes the actions every frame, before the scene has a chance to perform them, which keeps the labels and speech icon from doing anything (see here). Likewise, make sure you aren't removing all actions or changing the alpha of the labels elsewhere before this set of actions can complete.
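If the function is being triggered repeatedly (from update(_:) or similar), a small guard prevents the sequence from being torn down and restarted every frame. A minimal sketch; the isFadingOut flag and the wrapper function are hypothetical, not part of the question's code:
    var isFadingOut = false

    func fadeOutInfoTextIfNeeded() {
        // Only start the fade once, even if this is reached every frame.
        guard !isFadingOut else { return }
        isFadingOut = true
        fadeOutInfoText()
        // Allow another fade after the wait + fade sequence (1 second) finishes.
        run(SKAction.sequence([SKAction.wait(forDuration: 1.0),
                               SKAction.run { self.isFadingOut = false }]))
    }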
Here is sample code for running a sequence:
let sprite = SKSpriteNode(imageNamed:"Spaceship")
let scale = SKAction.scale(to: 0.1, duration: 0.5)
let fade = SKAction.fadeOut(withDuration: 0.5)
let sequence = SKAction.sequence([scale, fade])
sprite.run(sequence)
Let me know whether it is useful.

How to draw text in iOS (Swift) using the Metal framework

I am working on a real-time data monitoring app. I have successfully drawn waves using the Metal framework, but I am facing problems drawing simple text/strings, e.g. how to draw "Hello" in an MTKView. Here I am updating the vertices using a timer and then calling draw() to perform the drawing. Only GPU rendering is required.
func draw(in view: MTKView) {
    guard let descriptor = view.currentRenderPassDescriptor else {
        return
    }
    let commandBuffer = commandQue.makeCommandBuffer()
    let commandEncoder = commandBuffer?.makeRenderCommandEncoder(descriptor: descriptor)
    viewPort = MTLViewport(originX: 0.0, originY: 0.0, width: 750, height: 1334, znear: 0.0, zfar: 0.0)
    commandEncoder?.setViewport(viewPort!)
    commandEncoder?.setVertexBuffer(layoutBuffer, offset: 0, index: 0)
    commandEncoder?.setRenderPipelineState(noninterleavedRenderPipeline)
    commandEncoder?.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: verticesLayout.count)
    commandEncoder?.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
    commandEncoder?.setRenderPipelineState(interleavedRenderPipeline)
    commandEncoder?.drawPrimitives(type: .line, vertexStart: 0, vertexCount: vertices.count)
    commandEncoder?.setVertexBuffer(topBuffer, offset: 0, index: 0)
    commandEncoder?.setRenderPipelineState(topInterleavedRenderPipeline)
    commandEncoder?.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: verticesRect.count)
    commandEncoder?.endEncoding()
    if let drawable = view.currentDrawable {
        commandBuffer?.present(drawable)
    }
    commandBuffer?.commit()
}
There are a number of libraries around, most of which use information directly from fonts to generate lines, Bézier curves, and other primitives that are "procedural" rather than representational (the latter being like a bitmap), and then tessellate them for presentation using Metal primitives.
Warren Moore has a 2018 piece (with code) that describes such an approach using libtess, an established tessellation library. I have not used this version but have played with earlier code that Moore has posted, with some success. You can find it at: https://metalbyexample.com/text-3d/
The article describes 3D text rendering (really artsy stuff) but also has links to other sites that describe simpler 2D solutions.
Warren posted an older 2D Objective-C solution on GitHub at: https://github.com/metal-by-example/sample-code/tree/master/objc/07-Mipmapping/Mipmapping.
It's not for the squeamish, but given the associated code I think you should be able to make progress.
What might strike you as a bit less intimidating, though the results are a bit grainier, is to use MetalKit to import a Core Graphics texture into which you have written some text. See: https://developer.apple.com/documentation/metalkit/mtktextureloader
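A minimal sketch of that last approach (untested; it assumes UIKit is available, that device is a valid MTLDevice, and that white text on a transparent background is what you want):
    import MetalKit
    import UIKit

    func makeTextTexture(_ string: String, device: MTLDevice) throws -> MTLTexture {
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.systemFont(ofSize: 64),
            .foregroundColor: UIColor.white
        ]
        // Render the string into a bitmap with Core Graphics...
        let size = (string as NSString).size(withAttributes: attributes)
        let renderer = UIGraphicsImageRenderer(size: size)
        let image = renderer.image { _ in
            (string as NSString).draw(at: .zero, withAttributes: attributes)
        }
        // ...then let MTKTextureLoader turn the CGImage into an MTLTexture.
        let loader = MTKTextureLoader(device: device)
        return try loader.newTexture(cgImage: image.cgImage!, options: [.SRGB: false])
    }
The resulting texture can then be sampled onto a textured quad drawn alongside the waveform geometry.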

MTKView compositing one UIImage at a time

I have an array of UIImages that I want to composite via an MTKView using a variety of specific comp modes (source-over, erase, etc.). With the approach I describe below, I find that the biggest overhead seems to be in converting each UIImage into an MTLTexture that I can use to populate the MTKView's currentDrawable buffer.
The drawing loop looks like this:
for strokeDataCurrent in strokeDataArray {
    let strokeImage = UIImage(data: strokeDataCurrent.image) // brush stroke
    let strokeBbox = strokeDataCurrent.bbox                  // brush bounding box
    let strokeType = strokeDataCurrent.strokeType            // used to define comp mode
    // Convert strokeImage to an MTLTexture and composite
    drawStrokeImage(paintingViewMetal: self.canvasMetalViewPainting, strokeImage: strokeImage!, strokeBbox: strokeBbox, strokeType: strokeType)
}
Inside of drawStrokeImage, I convert each stroke to an MTLTexture like this:
guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else { return }
let destCGImage = strokeImage.cgImage!
let dstData: CFData = destCGImage.dataProvider!.data!
let pixelData = CFDataGetBytePtr(dstData)
let region = MTLRegionMake2D(0, 0, Int(width), Int(height))
stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: Int(rowBytes))
With all this in place, I define a vertex buffer and set a command encoder:
defineCommandEncoder(renderCommandEncoder: renderCommandEncoder, vertexArrayStamps: vertexArrayStamps, metalTexture: stampTexture)
and call setNeedsDisplay() to render. This happens for each stroke in the above for loop.
While I get OK performance with this approach, I'm wondering if I can squeeze more performance out somewhere along the way. As I said, I think the current bottleneck is in going from CGImage to MTLTexture.
Note that I am rendering to a defined MTLTexture metalDrawableTextureComposite which I am blitting to the currentDrawable for each stroke:
copyTexture(buffer: commandBuffer!, from: metalDrawableTextureComposite, to: self.currentDrawable!.texture)
Hopefully this is enough detail to provide context for my question. Also, if anyone has ideas for other, faster (hopefully GPU/Metal-based) compositing approaches, that would be awesome. Any thoughts would be appreciated.
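One thing that may be worth measuring here: since each stroke begins life as encoded image Data, MTKTextureLoader can decode and upload it in one step, skipping the UIImage/CGImage round trip entirely. A sketch, not a drop-in replacement; it assumes a long-lived MTKTextureLoader created once per device:
    import MetalKit

    // Decode stroke data straight to an MTLTexture, bypassing UIImage/CGImage.
    func makeStrokeTexture(from imageData: Data, using textureLoader: MTKTextureLoader) -> MTLTexture? {
        return try? textureLoader.newTexture(data: imageData, options: nil)
    }
Whether this beats the CFDataGetBytePtr/replace(region:) path above would need profiling against the same stroke set.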

What can cause lag in recurrent calls to the draw() function of a MetalKit MTKView

I am designing a Cocoa application using the Swift 4.0 MetalKit API for macOS 10.13. Everything I report here was done on my 2015 MacBook Pro.
I have successfully implemented an MTKView which renders simple geometry with a low vertex count (cubes, triangles, etc.) very well. I implemented a mouse-drag-based camera which rotates, strafes and magnifies. Here is a screenshot of the Xcode FPS debug screen while I rotate the cube:
However, when I try loading a dataset which contains only ~1500 vertices (each stored as 7 × 32-bit floats, i.e. ~42 kB total), I start getting very bad FPS lag. I will show the code implementation below. Here is a screenshot (note that in this image the view encompasses only a few of the vertices, which are rendered as large points):
Here is my implementation:
1) viewDidLoad() :
override func viewDidLoad() {
    super.viewDidLoad()

    // Initialization of the projection matrix and camera
    self.projectionMatrix = float4x4.makePerspectiveViewAngle(float4x4.degrees(toRad: 85.0),
                                                              aspectRatio: Float(self.view.bounds.size.width / self.view.bounds.size.height),
                                                              nearZ: 0.01, farZ: 100.0)
    self.vCam = ViewCamera()

    // Initialization of the MTLDevice
    metalView.device = MTLCreateSystemDefaultDevice()
    device = metalView.device
    metalView.colorPixelFormat = .bgra8Unorm

    // Initialization of the shader library
    let defaultLibrary = device.makeDefaultLibrary()!
    let fragmentProgram = defaultLibrary.makeFunction(name: "basic_fragment")
    let vertexProgram = defaultLibrary.makeFunction(name: "basic_vertex")

    // Initialization of the MTLRenderPipelineState
    let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
    pipelineStateDescriptor.vertexFunction = vertexProgram
    pipelineStateDescriptor.fragmentFunction = fragmentProgram
    pipelineStateDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    pipelineState = try! device.makeRenderPipelineState(descriptor: pipelineStateDescriptor)

    // Initialization of the MTLCommandQueue
    commandQueue = device.makeCommandQueue()

    // Initialization of delegates and BufferProvider for the view and projection matrix MTLBuffers
    self.metalView.delegate = self
    self.metalView.eventDelegate = self
    self.bufferProvider = BufferProvider(device: device, inflightBuffersCount: 3, sizeOfUniformsBuffer: MemoryLayout<Float>.size * float4x4.numberOfElements() * 2)
}
2) Loading of the MTLBuffer for the Cube vertices :
private func makeCubeVertexBuffer() {
    let cube = Cube()
    let vertices = cube.verticesArray
    var vertexData = Array<Float>()
    for vertex in vertices {
        vertexData += vertex.floatBuffer()
    }
    VDataSize = vertexData.count * MemoryLayout.size(ofValue: vertexData[0])
    self.vertexBuffer = device.makeBuffer(bytes: vertexData, length: VDataSize!, options: [])!
    self.vertexCount = vertices.count
}
3) Loading of the MTLBuffer for the dataset vertices. Note that I explicitly declare this buffer's storage mode as private, to ensure efficient GPU access, since the CPU does not need the data once the buffer is loaded. Also note that I am loading only 1/100th of the vertices in my actual dataset, because the entire OS on my machine starts lagging when I try to load it all (only 4.2 MB of data).
public func loadDataset(datasetVolume: DatasetVolume) {
    // Load dataset vertices
    self.datasetVolume = datasetVolume
    self.datasetVertexCount = self.datasetVolume!.vertexCount/100
    let rgbaVertices = self.datasetVolume!.rgbaPixelVolume[0...(self.datasetVertexCount!-1)]
    var vertexData = Array<Float>()
    for vertex in rgbaVertices {
        vertexData += vertex.floatBuffer()
    }
    let dataSize = vertexData.count * MemoryLayout.size(ofValue: vertexData[0])

    // Make two MTLBuffers: one with Shared storage mode, in which the data is initially loaded,
    // and a second one with Private storage mode
    self.datasetVertexBuffer = device.makeBuffer(bytes: vertexData, length: dataSize, options: MTLResourceOptions.storageModeShared)
    self.datasetVertexBufferGPU = device.makeBuffer(length: dataSize, options: MTLResourceOptions.storageModePrivate)

    // Create an MTLCommandBuffer and blit the vertex data from the Shared MTLBuffer to the Private MTLBuffer
    let commandBuffer = self.commandQueue.makeCommandBuffer()
    let blitEncoder = commandBuffer!.makeBlitCommandEncoder()
    blitEncoder!.copy(from: self.datasetVertexBuffer!, sourceOffset: 0, to: self.datasetVertexBufferGPU!, destinationOffset: 0, size: dataSize)
    blitEncoder!.endEncoding()
    commandBuffer!.commit()

    // Clean up
    self.datasetLoaded = true
    self.datasetVertexBuffer = nil
}
4) Finally, here is the render loop. Again, this is using MetalKit.
func draw(in view: MTKView) {
    render(view.currentDrawable)
}

private func render(_ drawable: CAMetalDrawable?) {
    guard let drawable = drawable else { return }

    // Make sure an MTLBuffer for the view and projection matrices is available
    _ = self.bufferProvider?.availableResourcesSemaphore.wait(timeout: DispatchTime.distantFuture)

    // Initialize common render pass descriptor
    let renderPassDescriptor = MTLRenderPassDescriptor()
    renderPassDescriptor.colorAttachments[0].texture = drawable.texture
    renderPassDescriptor.colorAttachments[0].loadAction = .clear
    renderPassDescriptor.colorAttachments[0].clearColor = Colors.White
    renderPassDescriptor.colorAttachments[0].storeAction = .store

    // Initialize a command buffer and add a completed handler to release an MTLBuffer
    // from the BufferProvider once the GPU is done processing this command
    let commandBuffer = self.commandQueue.makeCommandBuffer()
    commandBuffer?.addCompletedHandler { (_) in
        self.bufferProvider?.availableResourcesSemaphore.signal()
    }

    // Update the view matrix and obtain an MTLBuffer for it and the projection matrix
    let camViewMatrix = self.vCam.getLookAtMatrix()
    let uniformBuffer = bufferProvider?.nextUniformsBuffer(projectionMatrix: projectionMatrix, camViewMatrix: camViewMatrix)

    // Initialize an MTLParallelRenderCommandEncoder
    let parallelEncoder = commandBuffer?.makeParallelRenderCommandEncoder(descriptor: renderPassDescriptor)

    // Create a command encoder for the cube vertices if their data is loaded
    if self.cubeLoaded == true {
        let cubeRenderEncoder = parallelEncoder?.makeRenderCommandEncoder()
        cubeRenderEncoder!.setCullMode(MTLCullMode.front)
        cubeRenderEncoder!.setRenderPipelineState(pipelineState)
        cubeRenderEncoder!.setTriangleFillMode(MTLTriangleFillMode.fill)
        cubeRenderEncoder!.setVertexBuffer(self.cubeVertexBuffer, offset: 0, index: 0)
        cubeRenderEncoder!.setVertexBuffer(uniformBuffer, offset: 0, index: 1)
        cubeRenderEncoder!.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount!, instanceCount: self.cubeVertexCount!/3)
        cubeRenderEncoder!.endEncoding()
    }

    // Create a command encoder for the dataset vertices if their data is loaded
    if self.datasetLoaded == true {
        let rgbaVolumeRenderEncoder = parallelEncoder?.makeRenderCommandEncoder()
        rgbaVolumeRenderEncoder!.setRenderPipelineState(pipelineState)
        rgbaVolumeRenderEncoder!.setVertexBuffer(self.datasetVertexBufferGPU!, offset: 0, index: 0)
        rgbaVolumeRenderEncoder!.setVertexBuffer(uniformBuffer, offset: 0, index: 1)
        rgbaVolumeRenderEncoder!.drawPrimitives(type: .point, vertexStart: 0, vertexCount: datasetVertexCount!, instanceCount: datasetVertexCount!)
        rgbaVolumeRenderEncoder!.endEncoding()
    }

    // End command buffer encoding and commit the task
    parallelEncoder!.endEncoding()
    commandBuffer!.present(drawable)
    commandBuffer!.commit()
}
Alright, so these are the steps I have been through in trying to figure out what was causing the lag, keeping in mind that the lagging effect is proportional to the size of the dataset's vertex buffer:
I initially thought it was due to the GPU not being able to access the memory quickly enough because it was in Shared storage mode, so I changed the dataset MTLBuffer to Private storage mode. This did not solve the problem.
I then thought the problem was due to the CPU spending too much time in my render() function, possibly because of a problem with the BufferProvider, or because the CPU was somehow reprocessing/reloading the dataset vertex buffer every frame. To check this, I used the Time Profiler in Xcode's Instruments. Unfortunately, it seems that the problem is that the application calls this render method (in other words, MTKView's draw() method) only very rarely. Here are some screenshots:
The spike at ~10 seconds is when the cube is loaded
The spikes between ~25-35 seconds are when the dataset is loaded
This image (^) shows the activity between ~10-20 seconds, right after the cube was loaded. This is when the FPS is at ~60. You can see that the main thread spends around 53ms in the render() function during these 10 seconds.
This image (^) shows the activity between ~40-50 seconds, right after the dataset was loaded. This is when the FPS is < 10. You can see that the main thread spends around 4ms in the render() function during these 10 seconds. As you can see, none of the methods which are usually called from within this function are called (ie: the ones we can see called when only the cube is loaded, previous image). Of note, when I load the dataset, the time profiler's timer starts to jump (ie: it stops for a few seconds and then jumps to the current time... repeat).
So this is where I am. The problem seems to be that the CPU somehow gets overloaded with these 42 kB of data... recursively. I also ran a test with the Allocations instrument in Xcode's Instruments; no signs of a memory leak, as far as I could tell. (You might have noticed that a lot of this is new to me.)
Sorry for the convoluted post, I hope it's not too hard to follow. Thank you all in advance for your help.
Edit:
Here are my shaders, in case you would like to see them:
struct VertexIn {
    packed_float3 position;
    packed_float4 color;
};

struct VertexOut {
    float4 position [[position]];
    float4 color;
    float size [[point_size]];
};

struct Uniforms {
    float4x4 cameraMatrix;
    float4x4 projectionMatrix;
};

vertex VertexOut basic_vertex(const device VertexIn* vertex_array [[ buffer(0) ]],
                              constant Uniforms& uniforms [[ buffer(1) ]],
                              unsigned int vid [[ vertex_id ]]) {
    float4x4 cam_Matrix = uniforms.cameraMatrix;
    float4x4 proj_Matrix = uniforms.projectionMatrix;
    VertexIn vertexIn = vertex_array[vid];
    VertexOut vertexOut;
    vertexOut.position = proj_Matrix * cam_Matrix * float4(vertexIn.position, 1);
    vertexOut.color = vertexIn.color;
    vertexOut.size = 15;
    return vertexOut;
}

fragment half4 basic_fragment(VertexOut interpolated [[stage_in]]) {
    return half4(interpolated.color[0], interpolated.color[1], interpolated.color[2], interpolated.color[3]);
}
I think the main problem is that you're telling Metal to do instanced drawing when you shouldn't be. This line:
rgbaVolumeRenderEncoder!.drawPrimitives(type: .point, vertexStart: 0, vertexCount: datasetVertexCount!, instanceCount: datasetVertexCount!)
is telling Metal to draw datasetVertexCount! instances of each of datasetVertexCount! vertexes. The GPU work is growing with the square of the vertex count. Also, since you don't make use of the instance ID to, for example, tweak the vertex position, all of these instances are identical and thus redundant.
I think the same applies to this line:
cubeRenderEncoder!.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount!, instanceCount: self.cubeVertexCount!/3)
although it's not clear what self.cubeVertexCount! is and whether it grows with vertexCount. In any case, since it seems you're using the same pipeline state and thus same shaders which don't make use of the instance ID, it's still useless and wasteful.
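Concretely, the non-instanced versions of those draw calls would look like this (untested, but matching the code above):
    // Draw each point/triangle exactly once; no instancing is needed here.
    rgbaVolumeRenderEncoder!.drawPrimitives(type: .point, vertexStart: 0, vertexCount: datasetVertexCount!)
    cubeRenderEncoder!.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount!)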
Other things:
Why are you using MTLParallelRenderCommandEncoder when you're not actually using the parallelism that it enables? Don't do that.
Everywhere you're using the size method of MemoryLayout, you should almost certainly be using stride instead. And if you're computing the stride of a compound data structure, do not take the stride of one element of that structure and multiply by the number of elements. Take the stride of the whole data structure.
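For example, with a hypothetical Swift-side vertex struct mirroring the shader's 7-float layout (not the question's actual types):
    struct PointVertex {
        var position: (Float, Float, Float)      // mirrors packed_float3
        var color: (Float, Float, Float, Float)  // mirrors packed_float4
    }

    let pointCount = 1500
    // Correct: the stride of the whole structure accounts for its alignment and padding.
    let bufferLength = MemoryLayout<PointVertex>.stride * pointCount
    // Risky: per-element stride times element count can disagree with the struct's real layout.
    // let bufferLength = MemoryLayout<Float>.stride * 7 * pointCount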

How to get the advantages of SceneKit's level editor programmatically

I've just run a couple of tests comparing the performance of different ways of loading/creating a scene. The test was simply rendering a 32x32 grid of cubes and eyeballing the CPU usage, memory, energy and rendering times. Not very scientific, but there were some clear results. The four tests consisted of:
1. Loading a .dae, e.g. SCNScene(named: "grid.dae")
2. Converting the .dae to a .scn file in Xcode and loading that
3. Building the grid manually in the SceneKit editor using a reference node
4. Building the grid programmatically using SCNReferenceNode (see code at the bottom of the question)
I expected 1 and 2 to be broadly the same, and they were.
I expected test 3 to have much better performance than tests 1 and 2, and it did. The CPU load and energy usage were very low, it had half the memory footprint, and the rendering time was a fraction of the rendering times for tests 1 and 2.
I was hoping test 4 would match test 3, but it didn't. It appeared to be the same as or worse than tests 1 and 2.
// Code for test 4
// Code for test 4
let boxPath = Bundle.main.path(forResource: "box", ofType: "scn")
let boxUrl = URL(fileURLWithPath: boxPath!)
let offset: Int = 16
for xIndex: Int in 0...32 {
    for yIndex: Int in 0...32 {
        let boxReference = SCNReferenceNode(url: boxUrl)
        scene.rootNode.addChildNode(boxReference!)
        boxReference?.position.x = Float(xIndex - offset)
        boxReference?.position.y = Float(yIndex - offset)
        boxReference?.load()
    }
}
Is the performance advantage that SceneKit's level editor provides available to developers, and I'm just going about it wrong, or is SceneKit/Xcode doing something bespoke under the hood?
UPDATE
In response to Confused's comment, I tried using the flattenedClone() method on SCNNode. Here is a variation on the original code using that technique:
let boxPath = Bundle.main.path(forResource: "box", ofType: "scn")
let boxUrl = URL(fileURLWithPath: boxPath!)
let offset: Int = 16
let testNode = SCNNode()
for xIndex: Int in 0...32 {
    for yIndex: Int in 0...32 {
        let boxReference = SCNReferenceNode(url: boxUrl)
        testNode.addChildNode(boxReference!)
        boxReference?.position.x = Float(xIndex - offset)
        boxReference?.position.y = Float(yIndex - offset)
        boxReference?.load()
    }
}
let optimizedNode = testNode.flattenedClone()
scene.rootNode.addChildNode(optimizedNode)
scene.rootNode.addChildNode(optimizedNode)