I would like to use a compute shader to modify my vertices before they are passed to the vertex shader. I can’t find any examples or explanations of this, except that it seems to be mentioned here: Metal emulate geometry shaders using compute shaders. This doesn’t help me as it doesn’t explain the CPU part of it.
I have seen many examples where a texture buffer is read and written to in a compute shader, but I need to read and modify the vertex buffer, which contains custom vertex structs with normals, and is created by a MDLMesh. I would be forever grateful for some sample code!
BACKGROUND
What I actually want to achieve is to be able to modify the vertex normals on the GPU. The other option would be to access the entire triangle from the vertex shader, as in the linked answer. For some reason I can only access a single vertex, using the stage_in attribute. Using the entire buffer does not work for me in this particular case; this is probably related to using a mesh provided by Model I/O and MDLMesh. When I create the vertices manually, I am able to access the vertex buffer array. Having said that, with that solution I would have to calculate the new vertex normal three times for each triangle, which seems wasteful, and in any case I want to be able to apply compute shaders to the vertex buffer!
Thanks to Ken Thomases' comments, I managed to find a solution. He made me realise it is quite straightforward:
I'm using a vertex struct that looks like this:
// Metal side
struct Vertex {
float4 position;
float4 normal;
float4 color;
};
// Swift side
struct Vertex {
var position: float4
var normal: float4
var color: float4
}
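One thing worth checking is that the two layouts actually agree: with three float4 members, both the Metal and the Swift struct should come out at 48 bytes per vertex. A quick sanity check on the Swift side (a sketch, assuming float4 comes from simd):
import simd

// Three 16-byte float4 members -> 48 bytes per vertex, matching the Metal struct.
assert(MemoryLayout<Vertex>.stride == 48)
assert(MemoryLayout<Vertex>.stride == 3 * MemoryLayout<float4>.stride)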
During setup where I usually create a vertex buffer, index buffer and render pipeline state, I now also make a compute pipeline state:
// Vertex buffer
let dataSize = vertexData.count*MemoryLayout<Vertex>.stride
vertexBuffer = device.makeBuffer(bytes: vertexData, length: dataSize, options: [])!
// Index buffer
indexCount = indices.count
let indexSize = indexCount*MemoryLayout<UInt16>.stride
indexBuffer = device.makeBuffer(bytes: indices, length: indexSize, options: [])!
// Compute pipeline state
let adjustmentFunction = library.makeFunction(name: "adjustment_func")!
cps = try! device.makeComputePipelineState(function: adjustmentFunction)
// Render pipeline state
let rpld = MTLRenderPipelineDescriptor()
rpld.vertexFunction = library.makeFunction(name: "vertex_func")
rpld.fragmentFunction = library.makeFunction(name: "fragment_func")
rpld.colorAttachments[0].pixelFormat = .bgra8Unorm
rps = try! device.makeRenderPipelineState(descriptor: rpld)
commandQueue = device.makeCommandQueue()!
Then my render function looks like this:
let black = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)
rpd.colorAttachments[0].texture = drawable.texture
rpd.colorAttachments[0].clearColor = black
rpd.colorAttachments[0].loadAction = .clear
let commandBuffer = commandQueue.makeCommandBuffer()!
let computeCommandEncoder = commandBuffer.makeComputeCommandEncoder()!
computeCommandEncoder.setComputePipelineState(cps)
computeCommandEncoder.setBuffer(vertexBuffer, offset: 0, index: 0)
computeCommandEncoder.dispatchThreadgroups(MTLSize(width: meshSize*meshSize, height: 1, depth: 1), threadsPerThreadgroup: MTLSize(width: 4, height: 1, depth: 1))
computeCommandEncoder.endEncoding()
let renderCommandEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: rpd)!
renderCommandEncoder.setRenderPipelineState(rps)
renderCommandEncoder.setFrontFacing(.counterClockwise)
renderCommandEncoder.setCullMode(.back)
updateUniforms(aspect: Float(size.width/size.height))
renderCommandEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
renderCommandEncoder.setVertexBuffer(uniformBuffer, offset: 0, index: 1)
renderCommandEncoder.setFragmentBuffer(uniformBuffer, offset: 0, index: 1)
renderCommandEncoder.drawIndexedPrimitives(type: .triangle, indexCount: indexCount, indexType: .uint16, indexBuffer: indexBuffer, indexBufferOffset: 0)
renderCommandEncoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()
Finally my compute shader looks like this:
kernel void adjustment_func(device Vertex *vertices [[buffer(0)]], uint2 gid [[thread_position_in_grid]]) {
vertices[gid.x].position = function(vertices[gid.x].position.xyz);
}
and this is the signature of my vertex function:
vertex VertexOut vertex_func(const device Vertex *vertices [[buffer(0)]], uint i [[vertex_id]], constant Uniforms &uniforms [[buffer(1)]])
How does one extract the SceneKit depth buffer? I'm making an AR-based app that runs with Metal, and I'm really struggling to find any information on how to extract a 2D depth buffer so I can render out fancy 3D photos of my scenes. Any help greatly appreciated.
Your question is unclear but I'll try to answer.
Depth pass from VR view
If you need to render a Depth pass from SceneKit's 3D environment, then you should use, for instance, an SCNGeometrySource.Semantic structure. There are vertex, normal, texcoord, color and tangent type properties. Let's see what the vertex property is:
static let vertex: SCNGeometrySource.Semantic
This semantic identifies data containing the positions of each vertex in the geometry. For a custom shader program, you use this semantic to bind SceneKit’s vertex position data to an input attribute of the shader. Vertex position data is typically an array of three- or four-component vectors.
Here's a code excerpt from the iOS Depth Sample project.
UPDATED: Using this code you can get a position for every point in SCNScene and assign a color for these points (this is what a zDepth channel really is):
import SceneKit
struct PointCloudVertex {
var x: Float, y: Float, z: Float
var r: Float, g: Float, b: Float
}
@objc class PointCloud: NSObject {
var pointCloud : [SCNVector3] = []
var colors: [UInt8] = []
public func pointCloudNode() -> SCNNode {
let points = self.pointCloud
var vertices = Array(repeating: PointCloudVertex(x: 0,
y: 0,
z: 0,
r: 0,
g: 0,
b: 0),
count: points.count)
for i in 0...(points.count-1) {
let p = points[i]
vertices[i].x = Float(p.x)
vertices[i].y = Float(p.y)
vertices[i].z = Float(p.z)
vertices[i].r = Float(colors[i * 4]) / 255.0
vertices[i].g = Float(colors[i * 4 + 1]) / 255.0
vertices[i].b = Float(colors[i * 4 + 2]) / 255.0
}
let node = buildNode(points: vertices)
return node
}
private func buildNode(points: [PointCloudVertex]) -> SCNNode {
let vertexData = NSData(
bytes: points,
length: MemoryLayout<PointCloudVertex>.size * points.count
)
let positionSource = SCNGeometrySource(
data: vertexData as Data,
semantic: SCNGeometrySource.Semantic.vertex,
vectorCount: points.count,
usesFloatComponents: true,
componentsPerVector: 3,
bytesPerComponent: MemoryLayout<Float>.size,
dataOffset: 0,
dataStride: MemoryLayout<PointCloudVertex>.size
)
let colorSource = SCNGeometrySource(
data: vertexData as Data,
semantic: SCNGeometrySource.Semantic.color,
vectorCount: points.count,
usesFloatComponents: true,
componentsPerVector: 3,
bytesPerComponent: MemoryLayout<Float>.size,
dataOffset: MemoryLayout<Float>.size * 3,
dataStride: MemoryLayout<PointCloudVertex>.size
)
let element = SCNGeometryElement(
data: nil,
primitiveType: .point,
primitiveCount: points.count,
bytesPerIndex: MemoryLayout<Int>.size
)
element.pointSize = 1
element.minimumPointScreenSpaceRadius = 1
element.maximumPointScreenSpaceRadius = 5
let pointsGeometry = SCNGeometry(sources: [positionSource, colorSource], elements: [element])
return SCNNode(geometry: pointsGeometry)
}
}
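A minimal usage sketch (it assumes pointCloud and colors have already been filled in, with four RGBA bytes per point, and that scene is your SCNScene):
let cloud = PointCloud()
// cloud.pointCloud and cloud.colors are assumed to be populated elsewhere.
scene.rootNode.addChildNode(cloud.pointCloudNode())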
Depth pass from AR view
If you need to render a Depth pass from an ARSCNView, it is only possible when you're using ARFaceTrackingConfiguration with the front-facing camera. If so, you can use the capturedDepthData instance property, which gives you a depth map captured along with the video frame.
var capturedDepthData: AVDepthData? { get }
But this depth map is captured at only 15 fps and at a lower resolution than the corresponding RGB image captured at 60 fps.
Face-based AR uses the front-facing, depth-sensing camera on compatible devices. When running such a configuration, frames vended by the session contain a depth map captured by the depth camera in addition to the color pixel buffer (see capturedImage) captured by the color camera. This property’s value is always nil when running other AR configurations.
And a real code could be like this:
extension ViewController: ARSCNViewDelegate {
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
DispatchQueue.global().async {
guard let frame = self.sceneView.session.currentFrame else {
return
}
if let depthData = frame.capturedDepthData {
self.depthImage = depthData.depthDataMap
}
}
}
}
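If you then want to display or process that depth map, you can wrap the pixel buffer in a CIImage; a small sketch, assuming self.depthImage holds the CVPixelBuffer stored above:
// Wrap the captured depth buffer in a CIImage for display or further processing (requires import CoreImage).
if let depthBuffer = self.depthImage {
    let depthCIImage = CIImage(cvPixelBuffer: depthBuffer)
    // e.g. render depthCIImage with a CIContext, or show it via UIImage(ciImage:).
    _ = depthCIImage
}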
Depth pass from Video view
Also, you can extract a true Depth pass using the two back-facing cameras and the AVFoundation framework.
Look at the Image Depth Map tutorial, where the concept of disparity is introduced.
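A very rough sketch of the capture-side setup (this only shows the depth-specific pieces, omits error handling and delegate wiring, and is not a complete pipeline; see the tutorial for the full version):
import AVFoundation

let session = AVCaptureSession()
session.sessionPreset = .photo

// Depth/disparity requires a depth-capable camera (dual back camera or TrueDepth front camera).
if let camera = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back),
   let input = try? AVCaptureDeviceInput(device: camera),
   session.canAddInput(input) {
    session.addInput(input)

    let depthOutput = AVCaptureDepthDataOutput()
    if session.canAddOutput(depthOutput) {
        session.addOutput(depthOutput)
        depthOutput.isFilteringEnabled = true   // hole-filled, temporally smoothed depth
    }
    session.startRunning()
}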
I load an .obj file with Model I/O. I get a vertex buffer and an index buffer from the mesh and its submeshes, and I draw it with the index buffer; on the GPU the buffer is unpacked and all the triangles are loaded.
When I load the asset I create a vertex descriptor to specify what should be loaded from the 3D asset. When I pass the data to the shader I use a [[ stage_in ]] vertex parameter.
But I need to change the structure of the vertex that the model loading creates, as seen by the vertex-shader function's parameter, so that I can pass more data, such as an animation offset, for each vertex. In other words, I need to pass per-vertex data that is not in the 3D asset, in order to apply an offset to all vertices.
vertex VertexOut vertex_main(VertexIn vertexIn [[ stage_in ]], constant Uniforms &uniforms [[ buffer(1) ]], uint vertexID [[ vertex_id ]]) {
float4 worldPosition = uniforms.modelMatrix * float4(vertexIn.position, 1);
VertexOut vertexOut;
vertexOut.position = uniforms.viewProjectionMatrix * worldPosition;
vertexOut.worldPosition = worldPosition.xyz;
vertexOut.worldNormal = uniforms.normalMatrix * vertexIn.normal;
vertexOut.texCoords = vertexIn.texCoords;
return vertexOut;
}
This is how VertexIn looks:
struct VertexIn {
float3 position [[attribute(0)]];
float3 normal [[attribute(1)]];
float2 texCoords [[attribute(2)]];
};
Drawing of 3D asset
for mesh in meshes {
for i in 0..<mesh.vertexBuffers.count {
let vertexBuffer = mesh.vertexBuffers[i]
commandEncoder.setVertexBuffer(vertexBuffer.buffer, offset: vertexBuffer.offset, index: i)
}
for submesh in mesh.submeshes {
let indexBuffer = submesh.indexBuffer
commandEncoder.drawIndexedPrimitives(type: submesh.primitiveType,
indexCount: submesh.indexCount,
indexType: submesh.indexType,
indexBuffer: indexBuffer.buffer,
indexBufferOffset: indexBuffer.offset)
}
}
Vertex descriptor for loading 3D Model asset
let vertexDescriptor = MDLVertexDescriptor()
vertexDescriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition, format: .float3, offset: 0, bufferIndex: 0)
vertexDescriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeNormal, format: .float3, offset: MemoryLayout<Float>.size * 3, bufferIndex: 0)
vertexDescriptor.attributes[2] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate, format: .float2, offset: MemoryLayout<Float>.size * 6, bufferIndex: 0)
vertexDescriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<Float>.size * 8)
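For reference, the [[attribute(n)]] slots in VertexIn only line up with this layout because the same descriptor is handed to the render pipeline; a sketch of that step (assuming pipelineDescriptor is the MTLRenderPipelineDescriptor used to build the pipeline state):
// Convert the Model I/O descriptor so the [[stage_in]] attributes map onto the loaded buffers.
pipelineDescriptor.vertexDescriptor = MTKMetalVertexDescriptorFromModelIO(vertexDescriptor)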
Thank you for your help.
P.S.
I tried changing the data after loading the 3D asset, but that fails because I can't modify the vertex buffer created by Model I/O.
I am trying to implement a metal-backed drawing application where brushstrokes are drawn on an MTKView by stamping a textured square repeatedly along a path. I am varying the stamp's color/transparency at the vertex level as the brushstroke is drawn so I can simulate ink effects such as color/transparency fading over time, etc. This seems to work ok when I am using a classic "over" type blending (which does not accumulate value over time), but when I use "additive" blending, vertex transparency is completely ignored (i.e. I only get texture transparency). Below are snippets of pertinent code:
First, my vertex program:
vertex VertexOut basic_vertex(const device VertexIn* vertex_array [[ buffer(0) ]], unsigned int vid [[ vertex_id ]]) {
VertexIn VertexIn = vertex_array[vid];
VertexOut VertexOut;
VertexOut.position = float4(VertexIn.position,1);
VertexOut.color = VertexIn.color;
VertexOut.texCoord = VertexIn.texCoord;
return VertexOut;
}
Next, my fragment program multiplies the stamp's texture (with alpha) by the vertex color (also with alpha), which is needed for gradual tinting or fading of each stamp across a brushstroke:
fragment float4 basic_fragment(VertexOut interpolated [[stage_in]], texture2d<float> tex2D [[ texture(0) ]], sampler sampler2D [[ sampler(0) ]])
{
float4 color = interpolated.color * tex2D.sample(sampler2D, interpolated.texCoord); // texture multiplied by vertex color
return color;
}
Next, below are the blending definitions:
// 5a. Define render pipeline settings
let renderPipelineDescriptor = MTLRenderPipelineDescriptor()
renderPipelineDescriptor.vertexFunction = vertexProgram
renderPipelineDescriptor.sampleCount = self.sampleCount
renderPipelineDescriptor.colorAttachments[0].pixelFormat = self.colorPixelFormat
renderPipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
renderPipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
// settings for additive blending
if drawColorBlendMode == colorBlendMode.compositeAdd {
renderPipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .one
renderPipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .one
renderPipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
renderPipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .one
}
// settings for classic 'over' blending
if drawColorBlendMode == colorBlendMode.compositeOver {
renderPipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
renderPipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
renderPipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .sourceAlpha
renderPipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
}
renderPipelineDescriptor.fragmentFunction = fragmentProgram
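For reference, with the additive factors above the fixed-function blend computes, per channel, dst = src + dst, while the "over" factors compute dst = src * srcAlpha + dst * (1 - srcAlpha), where src is whatever the fragment function returns and dst is what is already in the render target.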
Finally, my render encoding:
brushTexture = MetalTexture(resourceName: "stamp_stipple1_0256", ext: "png", mipmaped: true)
brushTexture.loadTexture(device: device!, commandQ: commandQueue, flip: true)
renderCommandEncoder?.setRenderPipelineState(renderPipeline!)
renderCommandEncoder?.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
renderCommandEncoder?.setFragmentTexture(brushTexture.texture, index: 0)
renderCommandEncoder?.setFragmentSamplerState(samplerState, index: 0)
Is there anything I'm missing? As stated earlier, this works as expected in "over" mode, but not in "additive" mode. Again, the desired effect is to pass varying color/transparency settings to each stamp (a pair of textured triangles).
Through trial and error, I arrived at the following settings to get what I was after:
// Settings for compositeOver
renderPipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .one
renderPipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
renderPipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
renderPipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
Also, because I was dealing with many overlapping stamps, I had to divide the color/alpha values by the number of overlaps in order to avoid over-saturation. I think this, more than anything, was the reason I was not seeing color/alpha accumulation the way I expected.
stampColor = UIColor(red: (rgba.red * rgba.alpha / numOverlappingStamps), green: (rgba.green * rgba.alpha / numOverlappingStamps), blue: (rgba.blue * rgba.alpha / numOverlappingStamps), alpha: (rgba.alpha / numOverlappingStamps))
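The same premultiplication can be pulled out into a small helper; this is just a sketch of the pattern used in the line above (the numOverlappingStamps division is specific to my stroke geometry, so it is left out here):
import UIKit

// Hypothetical helper: premultiply RGB by alpha, which is what the .one / .oneMinusSourceAlpha
// factor pair above expects the fragment output to contain.
func premultiplied(red: CGFloat, green: CGFloat, blue: CGFloat, alpha: CGFloat) -> UIColor {
    return UIColor(red: red * alpha, green: green * alpha, blue: blue * alpha, alpha: alpha)
}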
I am designing a Cocoa application using the Swift 4.0 MetalKit API for macOS 10.13. Everything I report here was done on my 2015 MacBook Pro.
I have successfully implemented an MTKView which renders simple geometry with a low vertex count very well (cubes, triangles, etc.). I implemented a mouse-drag based camera which rotates, strafes and magnifies. Here is a screenshot of the Xcode FPS debug screen while I rotate the cube:
However, when I try loading a dataset which contains only ~1500 vertices (each stored as 7 x 32-bit floats, i.e. ~42 kB total), I start getting very bad FPS lag. I will show the implementation below. Here is a screenshot (note that in this image, the view only encompasses a few of the vertices, which are rendered as large points):
Here is my implementation:
1) viewDidLoad() :
override func viewDidLoad() {
super.viewDidLoad()
// Initialization of the projection matrix and camera
self.projectionMatrix = float4x4.makePerspectiveViewAngle(float4x4.degrees(toRad: 85.0),
aspectRatio: Float(self.view.bounds.size.width / self.view.bounds.size.height),
nearZ: 0.01, farZ: 100.0)
self.vCam = ViewCamera()
// Initialization of the MTLDevice
metalView.device = MTLCreateSystemDefaultDevice()
device = metalView.device
metalView.colorPixelFormat = .bgra8Unorm
// Initialization of the shader library
let defaultLibrary = device.makeDefaultLibrary()!
let fragmentProgram = defaultLibrary.makeFunction(name: "basic_fragment")
let vertexProgram = defaultLibrary.makeFunction(name: "basic_vertex")
// Initialization of the MTLRenderPipelineState
let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
pipelineStateDescriptor.vertexFunction = vertexProgram
pipelineStateDescriptor.fragmentFunction = fragmentProgram
pipelineStateDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineState = try! device.makeRenderPipelineState(descriptor: pipelineStateDescriptor)
// Initialization of the MTLCommandQueue
commandQueue = device.makeCommandQueue()
// Initialization of Delegates and BufferProvider for View and Projection matrix MTLBuffer
self.metalView.delegate = self
self.metalView.eventDelegate = self
self.bufferProvider = BufferProvider(device: device, inflightBuffersCount: 3, sizeOfUniformsBuffer: MemoryLayout<Float>.size * float4x4.numberOfElements() * 2)
}
2) Loading of the MTLBuffer for the Cube vertices :
private func makeCubeVertexBuffer() {
let cube = Cube()
let vertices = cube.verticesArray
var vertexData = Array<Float>()
for vertex in vertices{
vertexData += vertex.floatBuffer()
}
VDataSize = vertexData.count * MemoryLayout.size(ofValue: vertexData[0])
self.vertexBuffer = device.makeBuffer(bytes: vertexData, length: VDataSize!, options: [])!
self.vertexCount = vertices.count
}
3) Loading of the MTLBuffer for the dataset vertices. Note that I explicitly declare the storage mode of this buffer as Private in order to ensure efficient access to the data by the GPU since the CPU does not need to access the data once the buffer is loaded. Also, note that I am loading only 1/100th of the vertices in my actual dataset because the entire OS on my machine starts lagging when I try to load it entirely (only 4.2 MB of data).
public func loadDataset(datasetVolume: DatasetVolume) {
// Load dataset vertices
self.datasetVolume = datasetVolume
self.datasetVertexCount = self.datasetVolume!.vertexCount/100
let rgbaVertices = self.datasetVolume!.rgbaPixelVolume[0...(self.datasetVertexCount!-1)]
var vertexData = Array<Float>()
for vertex in rgbaVertices{
vertexData += vertex.floatBuffer()
}
let dataSize = vertexData.count * MemoryLayout.size(ofValue: vertexData[0])
// Make two MTLBuffer's: One with Shared storage mode in which data is initially loaded, and a second one with Private storage mode
self.datasetVertexBuffer = device.makeBuffer(bytes: vertexData, length: dataSize, options: MTLResourceOptions.storageModeShared)
self.datasetVertexBufferGPU = device.makeBuffer(length: dataSize, options: MTLResourceOptions.storageModePrivate)
// Create a MTLCommandBuffer and blit the vertex data from the Shared MTLBuffer to the Private MTLBuffer
let commandBuffer = self.commandQueue.makeCommandBuffer()
let blitEncoder = commandBuffer!.makeBlitCommandEncoder()
blitEncoder!.copy(from: self.datasetVertexBuffer!, sourceOffset: 0, to: self.datasetVertexBufferGPU!, destinationOffset: 0, size: dataSize)
blitEncoder!.endEncoding()
commandBuffer!.commit()
// Clean up
self.datasetLoaded = true
self.datasetVertexBuffer = nil
}
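Command buffers on the same queue execute in order, so later render passes will already see the blitted data; but if you want to know when the copy has actually finished on the GPU (for logging, or to defer setting datasetLoaded), a completion handler can be registered, a sketch using the same names as above:
// Optional: register this before commandBuffer!.commit() to be notified when the blit has finished.
commandBuffer!.addCompletedHandler { [weak self] _ in
    self?.datasetLoaded = true
}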
4) Finally, here is the render loop. Again, this is using MetalKit.
func draw(in view: MTKView) {
render(view.currentDrawable)
}
private func render(_ drawable: CAMetalDrawable?) {
guard let drawable = drawable else { return }
// Make sure an MTLBuffer for the View and Projection matrices is available
_ = self.bufferProvider?.availableResourcesSemaphore.wait(timeout: DispatchTime.distantFuture)
// Initialize common RenderPassDescriptor
let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = drawable.texture
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].clearColor = Colors.White
renderPassDescriptor.colorAttachments[0].storeAction = .store
// Initialize a CommandBuffer and add a CompletedHandler to release an MTLBuffer from the BufferProvider once the GPU is done processing this command
let commandBuffer = self.commandQueue.makeCommandBuffer()
commandBuffer?.addCompletedHandler { (_) in
self.bufferProvider?.availableResourcesSemaphore.signal()
}
// Update the View matrix and obtain an MTLBuffer for it and the projection matrix
let camViewMatrix = self.vCam.getLookAtMatrix()
let uniformBuffer = bufferProvider?.nextUniformsBuffer(projectionMatrix: projectionMatrix, camViewMatrix: camViewMatrix)
// Initialize a MTLParallelRenderCommandEncoder
let parallelEncoder = commandBuffer?.makeParallelRenderCommandEncoder(descriptor: renderPassDescriptor)
// Create a CommandEncoder for the cube vertices if its data is loaded
if self.cubeLoaded == true {
let cubeRenderEncoder = parallelEncoder?.makeRenderCommandEncoder()
cubeRenderEncoder!.setCullMode(MTLCullMode.front)
cubeRenderEncoder!.setRenderPipelineState(pipelineState)
cubeRenderEncoder!.setTriangleFillMode(MTLTriangleFillMode.fill)
cubeRenderEncoder!.setVertexBuffer(self.cubeVertexBuffer, offset: 0, index: 0)
cubeRenderEncoder!.setVertexBuffer(uniformBuffer, offset: 0, index: 1)
cubeRenderEncoder!.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount!, instanceCount: self.cubeVertexCount!/3)
cubeRenderEncoder!.endEncoding()
}
// Create a CommandEncoder for the dataset vertices if its data is loaded
if self.datasetLoaded == true {
let rgbaVolumeRenderEncoder = parallelEncoder?.makeRenderCommandEncoder()
rgbaVolumeRenderEncoder!.setRenderPipelineState(pipelineState)
rgbaVolumeRenderEncoder!.setVertexBuffer( self.datasetVertexBufferGPU!, offset: 0, index: 0)
rgbaVolumeRenderEncoder!.setVertexBuffer(uniformBuffer, offset: 0, index: 1)
rgbaVolumeRenderEncoder!.drawPrimitives(type: .point, vertexStart: 0, vertexCount: datasetVertexCount!, instanceCount: datasetVertexCount!)
rgbaVolumeRenderEncoder!.endEncoding()
}
// End CommandBuffer encoding and commit task
parallelEncoder!.endEncoding()
commandBuffer!.present(drawable)
commandBuffer!.commit()
}
Alright, so these are the steps I have been through in trying to figure out what was causing the lag, keeping in mind that the lagging effect is proportional to the size of the dataset's vertex buffer:
I initially thought it was due to the GPU not being able to access the memory quickly enough because it was in Shared storage mode -> I changed the dataset MTLBuffer to Private storage mode. This did not solve the problem.
I then thought that the problem was due to the CPU spending too much time in my render() function. This could possibly be due to a problem with the BufferProvider, or maybe because the CPU was somehow trying to reprocess/reload the dataset vertex buffer every frame -> To check this, I used the Time Profiler in Xcode's Instruments. Unfortunately, it seems that the problem is that the application calls this render method (in other words, MTKView's draw() method) only very rarely. Here are some screenshots:
The spike at ~10 seconds is when the cube is loaded
The spikes between ~25-35 seconds are when the dataset is loaded
This image (^) shows the activity between ~10-20 seconds, right after the cube was loaded. This is when the FPS is at ~60. You can see that the main thread spends around 53ms in the render() function during these 10 seconds.
This image (^) shows the activity between ~40-50 seconds, right after the dataset was loaded. This is when the FPS is < 10. You can see that the main thread spends around 4ms in the render() function during these 10 seconds. As you can see, none of the methods which are usually called from within this function are called (ie: the ones we can see called when only the cube is loaded, previous image). Of note, when I load the dataset, the time profiler's timer starts to jump (ie: it stops for a few seconds and then jumps to the current time... repeat).
So this is where I am. The problem seems to be that the CPU somehow gets overloaded with these 42 kB of data... recursively. I also did a test with the Allocations instrument in Xcode's Instruments. No signs of a memory leak, as far as I could tell (you might have noticed that a lot of this is new to me).
Sorry for the convoluted post, I hope it's not too hard to follow. Thank you all in advance for your help.
Edit:
Here are my shaders, in case you would like to see them:
struct VertexIn{
packed_float3 position;
packed_float4 color;
};
struct VertexOut{
float4 position [[position]];
float4 color;
float size [[point_size]];
};
struct Uniforms{
float4x4 cameraMatrix;
float4x4 projectionMatrix;
};
vertex VertexOut basic_vertex(const device VertexIn* vertex_array [[ buffer(0) ]],
constant Uniforms& uniforms [[ buffer(1) ]],
unsigned int vid [[ vertex_id ]]) {
float4x4 cam_Matrix = uniforms.cameraMatrix;
float4x4 proj_Matrix = uniforms.projectionMatrix;
VertexIn VertexIn = vertex_array[vid];
VertexOut VertexOut;
VertexOut.position = proj_Matrix * cam_Matrix * float4(VertexIn.position,1);
VertexOut.color = VertexIn.color;
VertexOut.size = 15;
return VertexOut;
}
fragment half4 basic_fragment(VertexOut interpolated [[stage_in]]) {
return half4(interpolated.color[0], interpolated.color[1], interpolated.color[2], interpolated.color[3]);
}
I think the main problem is that you're telling Metal to do instanced drawing when you shouldn't be. This line:
rgbaVolumeRenderEncoder!.drawPrimitives(type: .point, vertexStart: 0, vertexCount: datasetVertexCount!, instanceCount: datasetVertexCount!)
is telling Metal to draw datasetVertexCount! instances of each of datasetVertexCount! vertexes. The GPU work is growing with the square of the vertex count. Also, since you don't make use of the instance ID to, for example, tweak the vertex position, all of these instances are identical and thus redundant.
I think the same applies to this line:
cubeRenderEncoder!.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount!, instanceCount: self.cubeVertexCount!/3)
although it's not clear what self.cubeVertexCount! is and whether it grows with vertexCount. In any case, since it seems you're using the same pipeline state and thus same shaders which don't make use of the instance ID, it's still useless and wasteful.
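Concretely, unless you actually need instancing, drop the instanceCount parameter entirely; a sketch using the names from your code:
// Non-instanced versions of the two draw calls.
cubeRenderEncoder!.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount!)
rgbaVolumeRenderEncoder!.drawPrimitives(type: .point, vertexStart: 0, vertexCount: datasetVertexCount!)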
Other things:
Why are you using MTLParallelRenderCommandEncoder when you're not actually using the parallelism that it enables? Don't do that.
Everywhere you're using the size method of MemoryLayout, you should almost certainly be using stride instead. And if you're computing the stride of a compound data structure, do not take the stride of one element of that structure and multiply by the number of elements. Take the stride of the whole data structure.
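To make that last point concrete, a sketch (PointVertex here is a hypothetical stand-in for the question's 7-float vertex):
// Hypothetical 7-float vertex: 3 floats of position + 4 floats of color,
// mirroring the shader's packed_float3 / packed_float4 layout.
struct PointVertex {
    var x, y, z: Float
    var r, g, b, a: Float
}

let vertices = [PointVertex](repeating: PointVertex(x: 0, y: 0, z: 0, r: 1, g: 1, b: 1, a: 1),
                             count: 1500)

// Take the stride of the whole vertex type, not size-of-one-Float times a count.
let bufferLength = vertices.count * MemoryLayout<PointVertex>.stride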
I'm rendering a geometry that has some translucent areas (alpha < 1) in a metalkit MTKView. If isBlendingEnabled is left as false in the descriptor for the render pipeline state, then everything displays as it should (albeit with all solid colours).
I'm aware that rendering translucent objects depends on the draw order. For the time being, I just want to test what alpha blending looks like with the translucent areas blended with what is already in the render buffer, even if it's just blending through to the background (at this point still just the clear colour).
However, when I try to enable blending, the makeRenderPipelineState fails with the following error:
Compiler failed to build request
Error Domain=CompilerError Code=1 "Fragment shader does not write to render target color(0), index(1) that is required for blending"
Here's the code that tries to build the pipeline state in the MTKView's delegate. Where it inherits properties from the MTKView, I've put the values of those properties in the comments:
do {
let descriptor = MTLRenderPipelineDescriptor()
descriptor.vertexFunction = vertex
descriptor.fragmentFunction = fragment
descriptor.sampleCount = view.sampleCount // 4
descriptor.depthAttachmentPixelFormat = view.depthStencilPixelFormat //.depth32Float
let renderAttachment = descriptor.colorAttachments[0]
renderAttachment?.pixelFormat = view.colorPixelFormat //.bgra8Unorm
// following 7 lines cause makeRenderPipelineState to fail
renderAttachment?.isBlendingEnabled = true
renderAttachment?.alphaBlendOperation = .add
renderAttachment?.rgbBlendOperation = .add
renderAttachment?.sourceRGBBlendFactor = .sourceAlpha
renderAttachment?.sourceAlphaBlendFactor = .sourceAlpha
renderAttachment?.destinationRGBBlendFactor = .oneMinusSourceAlpha
renderAttachment?.destinationAlphaBlendFactor = .oneMinusSource1Alpha
computePipelineState = try device.makeComputePipelineState(function: kernel)
renderPipelineState = try device.makeRenderPipelineState(descriptor: descriptor)
} catch {
print(error)
}
Given that the error complains about color(0), I added a [[color(0)]] binding to the output of the fragment shader:
constant float3 directionalLight = float3(-50, -30, 80);
struct FragOut {
float4 solidColor [[ color(0) ]];
};
fragment FragOut passThroughFragment(Vertex fragIn [[ stage_in ]]) {
FragOut fragOut;
fragOut.solidColor = fragIn.color;
fragOut.solidColor.rgb *= max(0.4, dot(fragIn.normal, normalize(directionalLight)));
return fragOut;
};
And finally, the draw code:
if let renderPassDescriptor = view.currentRenderPassDescriptor,
let drawable = view.currentDrawable {
let commandBuffer = queue.makeCommandBuffer()
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.8, green: 0.8, blue: 1, alpha: 1)
let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
renderEncoder.setDepthStencilState(depthStencilState)
renderEncoder.setRenderPipelineState(renderPipelineState)
//renderEncoder.setTriangleFillMode(.lines)
renderEncoder.setVertexBuffer(sceneBuffer, offset: 0, at: 0)
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, at: 1)
renderEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: Int(vertexCount) )
renderEncoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()
}
I don't explicitly set any textures in the fragment shader. Does this mean that the currentDrawable is implicitly the color attachment at index 0? Why does the error message want to see color(0) at index 1? Does blending require two color attachments? (it can't just blend additively onto what has already been rendered?)
Thanks.
You seem to be inadvertently invoking dual-source blending. Rather than setting the destinationAlphaBlendFactor to oneMinusSource1Alpha, try oneMinusSourceAlpha (note the missing 1).
Also, your intuition that Metal writes to the first color attachment by default is correct (the current drawable is configured by MTKView to be the texture of the first color attachment). Rather than returning a struct with a member that is attributed with [[color(0)]], you can just return a float4 (or half4) from your fragment function, and that color will be written to the primary color attachment. However, the way you've written it should work once your blend factors are configured correctly.
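Concretely, the only pipeline change needed from the question's setup is the last blend factor (sketch):
// Single-source blending: oneMinusSourceAlpha, not oneMinusSource1Alpha.
renderAttachment?.destinationAlphaBlendFactor = .oneMinusSourceAlpha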