How to Extract SceneKit Depth Buffer at runtime in AR scene? - swift

How does one extract the SceneKit depth buffer? I'm making an AR-based app that runs on Metal, and I'm really struggling to find any info on how to extract a 2D depth buffer so I can render out fancy 3D photos of my scenes. Any help greatly appreciated.

Your question is unclear but I'll try to answer.
Depth pass from SceneKit view
If you need to render a Depth pass from SceneKit's 3D environment, then you should use, for instance, an SCNGeometrySource.Semantic structure. It has vertex, normal, texcoord, color and tangent type properties. Let's see what a vertex type property is:
static let vertex: SCNGeometrySource.Semantic
This semantic identifies data containing the positions of each vertex in the geometry. For a custom shader program, you use this semantic to bind SceneKit’s vertex position data to an input attribute of the shader. Vertex position data is typically an array of three- or four-component vectors.
Here's a code excerpt from the iOS Depth Sample project.
UPDATED: Using this code you can get a position for every point in an SCNScene and assign a color to these points (this is what a zDepth channel really is):
import SceneKit

struct PointCloudVertex {
    var x: Float, y: Float, z: Float
    var r: Float, g: Float, b: Float
}

@objc class PointCloud: NSObject {

    var pointCloud: [SCNVector3] = []
    var colors: [UInt8] = []

    public func pointCloudNode() -> SCNNode {
        let points = self.pointCloud
        var vertices = Array(repeating: PointCloudVertex(x: 0, y: 0, z: 0,
                                                         r: 0, g: 0, b: 0),
                             count: points.count)

        // Interleave position (xyz) and color (rgb) per point
        for i in 0..<points.count {
            let p = points[i]
            vertices[i].x = Float(p.x)
            vertices[i].y = Float(p.y)
            vertices[i].z = Float(p.z)
            vertices[i].r = Float(colors[i * 4]) / 255.0
            vertices[i].g = Float(colors[i * 4 + 1]) / 255.0
            vertices[i].b = Float(colors[i * 4 + 2]) / 255.0
        }

        let node = buildNode(points: vertices)
        return node
    }

    private func buildNode(points: [PointCloudVertex]) -> SCNNode {
        // Both sources read from the same interleaved vertex buffer
        let vertexData = NSData(
            bytes: points,
            length: MemoryLayout<PointCloudVertex>.size * points.count
        )
        let positionSource = SCNGeometrySource(
            data: vertexData as Data,
            semantic: SCNGeometrySource.Semantic.vertex,
            vectorCount: points.count,
            usesFloatComponents: true,
            componentsPerVector: 3,
            bytesPerComponent: MemoryLayout<Float>.size,
            dataOffset: 0,
            dataStride: MemoryLayout<PointCloudVertex>.size
        )
        let colorSource = SCNGeometrySource(
            data: vertexData as Data,
            semantic: SCNGeometrySource.Semantic.color,
            vectorCount: points.count,
            usesFloatComponents: true,
            componentsPerVector: 3,
            bytesPerComponent: MemoryLayout<Float>.size,
            dataOffset: MemoryLayout<Float>.size * 3,
            dataStride: MemoryLayout<PointCloudVertex>.size
        )
        let element = SCNGeometryElement(
            data: nil,
            primitiveType: .point,
            primitiveCount: points.count,
            bytesPerIndex: MemoryLayout<Int>.size
        )
        element.pointSize = 1
        element.minimumPointScreenSpaceRadius = 1
        element.maximumPointScreenSpaceRadius = 5

        let pointsGeometry = SCNGeometry(sources: [positionSource, colorSource], elements: [element])
        return SCNNode(geometry: pointsGeometry)
    }
}
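A minimal usage sketch, assuming you have already filled the point and color arrays from your own capture pipeline (myPoints and myRGBAColors are hypothetical placeholders, not part of the code above):
let cloud = PointCloud()
cloud.pointCloud = myPoints        // hypothetical [SCNVector3] captured elsewhere
cloud.colors = myRGBAColors        // hypothetical [UInt8], 4 bytes (RGBA) per point
sceneView.scene?.rootNode.addChildNode(cloud.pointCloudNode())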
Depth pass from AR view
If you need to render a Depth pass from ARSCNView, it is only possible when you're using ARFaceTrackingConfiguration with the front-facing camera. If so, you can employ the capturedDepthData instance property, which brings you a depth map captured along with the video frame.
var capturedDepthData: AVDepthData? { get }
But this depth map is delivered at only 15 fps and at a lower resolution than the corresponding RGB image at 60 fps.
Face-based AR uses the front-facing, depth-sensing camera on compatible devices. When running such a configuration, frames vended by the session contain a depth map captured by the depth camera in addition to the color pixel buffer (see capturedImage) captured by the color camera. This property’s value is always nil when running other AR configurations.
And real code could look like this:
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        DispatchQueue.global().async {
            guard let frame = self.sceneView.session.currentFrame else {
                return
            }
            if let depthData = frame.capturedDepthData {
                // AVDepthData is not itself a CVImageBuffer; grab its pixel buffer instead
                self.depthImage = depthData.depthDataMap
            }
        }
    }
}
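If you then want that depth map as a 2D image, read the depthDataMap (a CVPixelBuffer) out of the AVDepthData. One possible sketch, assuming frame is the ARFrame from the delegate above:
if let depthData = frame.capturedDepthData {
    let depthMap = depthData.depthDataMap                 // CVPixelBuffer in a disparity/depth float format
    let width = CVPixelBufferGetWidth(depthMap)           // noticeably smaller than capturedImage
    let height = CVPixelBufferGetHeight(depthMap)
    let ciImage = CIImage(cvPixelBuffer: depthMap)        // can be rendered, filtered, or saved
    print("Depth map \(width)x\(height)", ciImage.extent)
}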
Depth pass from Video view
Also, you can extract a true Depth pass using the two rear cameras and the AVFoundation framework.
Look at the Image Depth Map tutorial, which introduces the concept of disparity.
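A rough AVFoundation sketch of such a depth-enabled photo capture (standard AVCapture APIs; session setup is abbreviated and error handling is omitted):
import AVFoundation

let session = AVCaptureSession()
session.sessionPreset = .photo

// A dual rear camera (or the TrueDepth camera) can deliver depth alongside color
if let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back),
   let input = try? AVCaptureDeviceInput(device: device) {
    session.addInput(input)
}

let photoOutput = AVCapturePhotoOutput()
session.addOutput(photoOutput)
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

// When capturing:
let settings = AVCapturePhotoSettings()
settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
// photoOutput.capturePhoto(with: settings, delegate: self)
// Then read photo.depthData (an AVDepthData) in photoOutput(_:didFinishProcessingPhoto:error:).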

Related

Save ARFaceGeometry to OBJ file

In an iOS ARKit app, I've been trying to save the ARFaceGeometry data to an OBJ file. I followed the explanation here: How to make a 3D model from AVDepthData?. However, the OBJ isn't created correctly. Here's what I have:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    currentFaceAnchor = faceAnchor
    // If this is the first time with this anchor, get the controller to create content.
    // Otherwise (switching content), will change content when setting `selectedVirtualContent`.
    if node.childNodes.isEmpty, let contentNode = selectedContentController.renderer(renderer, nodeFor: faceAnchor) {
        node.addChildNode(contentNode)
    }
    // https://stackoverflow.com/questions/52953590/how-to-make-a-3d-model-from-avdepthdata
    let geometry = faceAnchor.geometry
    let allocator = MDLMeshBufferDataAllocator()
    let vertices = allocator.newBuffer(with: Data(fromArray: geometry.vertices), type: .vertex)
    let textureCoordinates = allocator.newBuffer(with: Data(fromArray: geometry.textureCoordinates), type: .vertex)
    let triangleIndices = allocator.newBuffer(with: Data(fromArray: geometry.triangleIndices), type: .index)
    let submesh = MDLSubmesh(indexBuffer: triangleIndices, indexCount: geometry.triangleIndices.count, indexType: .uInt16, geometryType: .triangles, material: MDLMaterial(name: "mat1", scatteringFunction: MDLPhysicallyPlausibleScatteringFunction()))
    let vertexDescriptor = MDLVertexDescriptor()
    // Attributes
    vertexDescriptor.addOrReplaceAttribute(MDLVertexAttribute(name: MDLVertexAttributePosition, format: .float3, offset: 0, bufferIndex: 0))
    vertexDescriptor.addOrReplaceAttribute(MDLVertexAttribute(name: MDLVertexAttributeNormal, format: .float3, offset: MemoryLayout<float3>.stride, bufferIndex: 0))
    vertexDescriptor.addOrReplaceAttribute(MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate, format: .float2, offset: MemoryLayout<float3>.stride + MemoryLayout<float3>.stride, bufferIndex: 0))
    // Layouts
    vertexDescriptor.layouts.add(MDLVertexBufferLayout(stride: MemoryLayout<float3>.stride + MemoryLayout<float3>.stride + MemoryLayout<float2>.stride))
    let mdlMesh = MDLMesh(vertexBuffers: [vertices, textureCoordinates], vertexCount: geometry.vertices.count, descriptor: vertexDescriptor, submeshes: [submesh])
    mdlMesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)
    let asset = MDLAsset(bufferAllocator: allocator)
    asset.add(mdlMesh)
    let documentsPath = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
    let exportUrl = documentsPath.appendingPathComponent("face.obj")
    try! asset.export(to: exportUrl)
}
The resulting OBJ file looks like this:
# Apple ModelIO OBJ File: face
mtllib face.mtl
g
v -0.000128156 -0.0277879 0.0575149
vn 0 0 0
vt -9.36008e-05 -0.0242016
usemtl material_1
f 1/1/1 1/1/1 1/1/1
f 1/1/1 1/1/1 1/1/1
f 1/1/1 1/1/1 1/1/1
... and many more lines
I would expect many more vertices, and the index values look wrong.
The core issue is that your vertex data isn't described correctly. When you provide a vertex descriptor to Model I/O while constructing a mesh, it represents the layout the data actually has, not your desired layout. You're supplying two vertex buffers, but your vertex descriptor describes an interleaved data layout with only one vertex buffer.
The easiest way to remedy this is to fix the vertex descriptor to reflect the data you're providing:
let vertexDescriptor = MDLVertexDescriptor()
// Attributes
vertexDescriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
format: .float3,
offset: 0,
bufferIndex: 0)
vertexDescriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate,
format: .float2,
offset: 0,
bufferIndex: 1)
// Layouts
vertexDescriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<float3>.stride)
vertexDescriptor.layouts[1] = MDLVertexBufferLayout(stride: MemoryLayout<float2>.stride)
When you later call addNormals(...), Model I/O will allocate the necessary space and update the vertex descriptor to reflect the new data. Since you're not rendering from the data and are instead immediately exporting it, the internal layout it chooses for the normals isn't important.
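With that descriptor in place, the rest of the question's export code can stay as it is; sketched here only to show the order of operations:
let mdlMesh = MDLMesh(vertexBuffers: [vertices, textureCoordinates],
                      vertexCount: geometry.vertices.count,
                      descriptor: vertexDescriptor,
                      submeshes: [submesh])
mdlMesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)  // Model I/O allocates space for the normals itself

let asset = MDLAsset(bufferAllocator: allocator)
asset.add(mdlMesh)
try? asset.export(to: exportUrl)   // face.obj in Documents, as before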

How to apply a 3D Model on detected face by Apple Vision "NO AR"

With the iPhone X TrueDepth camera it's possible to get the 3D coordinates of any object and use that information to position and scale the object, but with older iPhones we don't have access to AR on the front-facing camera. What I've done so far is detect the face using the Apple Vision framework and draw some 2D paths around the face or landmarks.
I've made a SceneView and applied it as the front layer of my view with a clear background, and beneath it is an AVCaptureVideoPreviewLayer. After detecting the face, my 3D object appears on the screen, but positioning and scaling it correctly according to the face boundingBox requires unprojecting and other steps which I got stuck on. I've also tried converting the 2D boundingBox to 3D using CATransform3D, but I failed! I'm wondering if what I want to achieve is even possible? I remember Snapchat was doing this before ARKit was available on iPhone, if I'm not wrong!
override func viewDidLoad() {
    super.viewDidLoad()
    self.view.addSubview(self.sceneView)
    self.sceneView.frame = self.view.bounds
    self.sceneView.backgroundColor = .clear
    self.node = self.scene.rootNode.childNode(withName: "face",
                                              recursively: true)!
}

fileprivate func updateFaceView(for result: VNFaceObservation, twoDFace: Face2D) {
    let box = convert(rect: result.boundingBox)
    defer {
        DispatchQueue.main.async {
            self.faceView.setNeedsDisplay()
        }
    }
    faceView.boundingBox = box
    self.sceneView.scene?.rootNode.addChildNode(self.node)

    let unprojectedBox = SCNVector3(box.origin.x, box.origin.y, 0.8)
    let worldPoint = sceneView.unprojectPoint(unprojectedBox)
    self.node.position = worldPoint
    /* Here I have to do the unprojecting
       to convert the value from a 2D point to a 3D point;
       the issue is also here. */
}
The only way to achieve this is to use SceneKit with an orthographic camera and use SCNGeometrySource to match the landmarks from Vision to the vertices of the mesh (a minimal camera setup is sketched after the steps below).
First, you need a mesh with the same number of vertices as Vision returns (66 - 77, depending on which Vision revision you're on). You can create one using a tool like Blender.
The mesh on Blender
Then, in code, each time you process your landmarks, you do these steps:
1- Get the mesh vertices:
func getVertices() -> [SCNVector3] {
    var result = [SCNVector3]()
    let planeSources = shape!.geometry?.sources(for: SCNGeometrySource.Semantic.vertex)
    if let planeSource = planeSources?.first {
        let stride = planeSource.dataStride
        let offset = planeSource.dataOffset
        let componentsPerVector = planeSource.componentsPerVector
        let bytesPerVector = componentsPerVector * planeSource.bytesPerComponent
        let vectors = [SCNVector3](repeating: SCNVector3Zero, count: planeSource.vectorCount)
        // [SCNVector3](count: planeSource.vectorCount, repeatedValue: SCNVector3Zero)
        let vertices = vectors.enumerated().map({
            (index: Int, element: SCNVector3) -> SCNVector3 in
            var vectorData = [Float](repeating: 0, count: componentsPerVector)
            let byteRange = NSMakeRange(index * stride + offset, bytesPerVector)
            let data = planeSource.data
            (data as NSData).getBytes(&vectorData, range: byteRange)
            return SCNVector3(x: vectorData[0], y: vectorData[1], z: vectorData[2])
        })
        result = vertices
    }
    return result
}
2- Unproject each landmark captured by Vision and keep them in a SCNVector3 array:
let unprojectedLandmark = sceneView.unprojectPoint(SCNVector3(landmarks[i].x, landmarks[i].y, 0))
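For example, a small loop that builds the whole array (here, landmarks is assumed to be the array of landmark points you already converted into the scene view's coordinate space):
var unprojectedLandmarks = [SCNVector3]()
for i in 0..<landmarks.count {
    let screenPoint = SCNVector3(Float(landmarks[i].x), Float(landmarks[i].y), 0)   // z = 0 plane
    unprojectedLandmarks.append(sceneView.unprojectPoint(screenPoint))
}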
3- Modify the geometry using the new vertices:
func reshapeGeometry(_ vertices: [SCNVector3]) {
    let source = SCNGeometrySource(vertices: vertices)
    var newSources = [SCNGeometrySource]()
    newSources.append(source)

    for source in shape!.geometry!.sources {
        if (source.semantic != SCNGeometrySource.Semantic.vertex) {
            newSources.append(source)
        }
    }

    let geometry = SCNGeometry(sources: newSources, elements: shape!.geometry?.elements)
    let material = shape!.geometry?.firstMaterial
    shape!.geometry = geometry
    shape!.geometry?.firstMaterial = material
}
I was able to do that and that was my method.
Hope this helps!
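As a side note on the orthographic camera mentioned above, a minimal setup sketch (the scale and position values are assumptions you would tune to your own view):
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.camera?.usesOrthographicProjection = true
cameraNode.camera?.orthographicScale = 1.0       // tune to the coordinate range of your landmarks
cameraNode.position = SCNVector3(0, 0, 10)
sceneView.scene?.rootNode.addChildNode(cameraNode)
sceneView.pointOfView = cameraNode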
I would suggest looking at Google's ARCore products, which support an Apple AR scene with the back- or front-facing camera, but add some additional functionality beyond Apple's when it comes to devices without a face-depth camera.
Apple's Core Vision is almost the same as Google's Core Vision framework, which returns 2D points representing the eyes/mouth/nose etc., and a face-tilt component.
However, if you want a way to simply apply 2D textures to a responsive 3D face, or alternatively attach 3D models to points on the face, then take a look at Google's ARCore Augmented Faces framework. It has great sample code for iOS and Android.

What can cause lag in recurrent calls to the draw() function of a MetalKit MTKView

I am designing a Cocoa application using the Swift 4.0 MetalKit API for macOS 10.13. Everything I report here was done on my 2015 MBPro.
I have successfully implemented an MTKView which renders simple geometry with a low vertex count very well (cubes, triangles, etc.). I implemented a mouse-drag based camera which rotates, strafes and magnifies. Here is a screenshot of the Xcode FPS debug screen while I rotate the cube:
However, when I try loading a dataset which contains only ~1500 vertices (each stored as 7 x 32-bit floats, i.e. 42 kB total), I start getting very bad FPS lag. I will show the code implementation below. Here is a screenshot (note that in this image the view only encompasses a few of the vertices, which are rendered as large points):
Here is my implementation:
1) viewDidLoad() :
override func viewDidLoad() {
    super.viewDidLoad()

    // Initialization of the projection matrix and camera
    self.projectionMatrix = float4x4.makePerspectiveViewAngle(float4x4.degrees(toRad: 85.0),
                                                              aspectRatio: Float(self.view.bounds.size.width / self.view.bounds.size.height),
                                                              nearZ: 0.01, farZ: 100.0)
    self.vCam = ViewCamera()

    // Initialization of the MTLDevice
    metalView.device = MTLCreateSystemDefaultDevice()
    device = metalView.device
    metalView.colorPixelFormat = .bgra8Unorm

    // Initialization of the shader library
    let defaultLibrary = device.makeDefaultLibrary()!
    let fragmentProgram = defaultLibrary.makeFunction(name: "basic_fragment")
    let vertexProgram = defaultLibrary.makeFunction(name: "basic_vertex")

    // Initialization of the MTLRenderPipelineState
    let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
    pipelineStateDescriptor.vertexFunction = vertexProgram
    pipelineStateDescriptor.fragmentFunction = fragmentProgram
    pipelineStateDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    pipelineState = try! device.makeRenderPipelineState(descriptor: pipelineStateDescriptor)

    // Initialization of the MTLCommandQueue
    commandQueue = device.makeCommandQueue()

    // Initialization of Delegates and BufferProvider for View and Projection matrix MTLBuffer
    self.metalView.delegate = self
    self.metalView.eventDelegate = self
    self.bufferProvider = BufferProvider(device: device, inflightBuffersCount: 3, sizeOfUniformsBuffer: MemoryLayout<Float>.size * float4x4.numberOfElements() * 2)
}
2) Loading of the MTLBuffer for the Cube vertices :
private func makeCubeVertexBuffer() {
    let cube = Cube()
    let vertices = cube.verticesArray
    var vertexData = Array<Float>()

    for vertex in vertices {
        vertexData += vertex.floatBuffer()
    }

    VDataSize = vertexData.count * MemoryLayout.size(ofValue: vertexData[0])
    self.vertexBuffer = device.makeBuffer(bytes: vertexData, length: VDataSize!, options: [])!
    self.vertexCount = vertices.count
}
3) Loading of the MTLBuffer for the dataset vertices. Note that I explicitly declare the storage mode of this buffer as Private in order to ensure efficient access to the data by the GPU since the CPU does not need to access the data once the buffer is loaded. Also, note that I am loading only 1/100th of the vertices in my actual dataset because the entire OS on my machine starts lagging when I try to load it entirely (only 4.2 MB of data).
public func loadDataset(datasetVolume: DatasetVolume) {
    // Load dataset vertices
    self.datasetVolume = datasetVolume
    self.datasetVertexCount = self.datasetVolume!.vertexCount/100

    let rgbaVertices = self.datasetVolume!.rgbaPixelVolume[0...(self.datasetVertexCount!-1)]
    var vertexData = Array<Float>()
    for vertex in rgbaVertices {
        vertexData += vertex.floatBuffer()
    }
    let dataSize = vertexData.count * MemoryLayout.size(ofValue: vertexData[0])

    // Make two MTLBuffers: one with Shared storage mode in which the data is initially loaded,
    // and a second one with Private storage mode
    self.datasetVertexBuffer = device.makeBuffer(bytes: vertexData, length: dataSize, options: MTLResourceOptions.storageModeShared)
    self.datasetVertexBufferGPU = device.makeBuffer(length: dataSize, options: MTLResourceOptions.storageModePrivate)

    // Create a MTLCommandBuffer and blit the vertex data from the Shared MTLBuffer to the Private MTLBuffer
    let commandBuffer = self.commandQueue.makeCommandBuffer()
    let blitEncoder = commandBuffer!.makeBlitCommandEncoder()
    blitEncoder!.copy(from: self.datasetVertexBuffer!, sourceOffset: 0, to: self.datasetVertexBufferGPU!, destinationOffset: 0, size: dataSize)
    blitEncoder!.endEncoding()
    commandBuffer!.commit()

    // Clean up
    self.datasetLoaded = true
    self.datasetVertexBuffer = nil
}
4) Finally, here is the render loop. Again, this is using MetalKit.
func draw(in view: MTKView) {
    render(view.currentDrawable)
}

private func render(_ drawable: CAMetalDrawable?) {
    guard let drawable = drawable else { return }

    // Make sure an MTLBuffer for the View and Projection matrices is available
    _ = self.bufferProvider?.availableResourcesSemaphore.wait(timeout: DispatchTime.distantFuture)

    // Initialize common RenderPassDescriptor
    let renderPassDescriptor = MTLRenderPassDescriptor()
    renderPassDescriptor.colorAttachments[0].texture = drawable.texture
    renderPassDescriptor.colorAttachments[0].loadAction = .clear
    renderPassDescriptor.colorAttachments[0].clearColor = Colors.White
    renderPassDescriptor.colorAttachments[0].storeAction = .store

    // Initialize a CommandBuffer and add a CompletedHandler to release an MTLBuffer
    // from the BufferProvider once the GPU is done processing this command
    let commandBuffer = self.commandQueue.makeCommandBuffer()
    commandBuffer?.addCompletedHandler { (_) in
        self.bufferProvider?.availableResourcesSemaphore.signal()
    }

    // Update the View matrix and obtain an MTLBuffer for it and the projection matrix
    let camViewMatrix = self.vCam.getLookAtMatrix()
    let uniformBuffer = bufferProvider?.nextUniformsBuffer(projectionMatrix: projectionMatrix, camViewMatrix: camViewMatrix)

    // Initialize a MTLParallelRenderCommandEncoder
    let parallelEncoder = commandBuffer?.makeParallelRenderCommandEncoder(descriptor: renderPassDescriptor)

    // Create a CommandEncoder for the cube vertices if its data is loaded
    if self.cubeLoaded == true {
        let cubeRenderEncoder = parallelEncoder?.makeRenderCommandEncoder()
        cubeRenderEncoder!.setCullMode(MTLCullMode.front)
        cubeRenderEncoder!.setRenderPipelineState(pipelineState)
        cubeRenderEncoder!.setTriangleFillMode(MTLTriangleFillMode.fill)
        cubeRenderEncoder!.setVertexBuffer(self.cubeVertexBuffer, offset: 0, index: 0)
        cubeRenderEncoder!.setVertexBuffer(uniformBuffer, offset: 0, index: 1)
        cubeRenderEncoder!.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount!, instanceCount: self.cubeVertexCount!/3)
        cubeRenderEncoder!.endEncoding()
    }

    // Create a CommandEncoder for the dataset vertices if its data is loaded
    if self.datasetLoaded == true {
        let rgbaVolumeRenderEncoder = parallelEncoder?.makeRenderCommandEncoder()
        rgbaVolumeRenderEncoder!.setRenderPipelineState(pipelineState)
        rgbaVolumeRenderEncoder!.setVertexBuffer(self.datasetVertexBufferGPU!, offset: 0, index: 0)
        rgbaVolumeRenderEncoder!.setVertexBuffer(uniformBuffer, offset: 0, index: 1)
        rgbaVolumeRenderEncoder!.drawPrimitives(type: .point, vertexStart: 0, vertexCount: datasetVertexCount!, instanceCount: datasetVertexCount!)
        rgbaVolumeRenderEncoder!.endEncoding()
    }

    // End CommandBuffer encoding and commit task
    parallelEncoder!.endEncoding()
    commandBuffer!.present(drawable)
    commandBuffer!.commit()
}
Alright, so these are the steps I have been through in trying to figure out what was causing the lag, keeping in mind that the lagging effect is proportional to the size of the dataset's vertex buffer:
I initially thought it was due to the GPU not being able to access the memory quickly enough because it was in Shared storage mode -> I changed the dataset MTLBuffer to Private storage mode. This did not solve the problem.
I then thought that the problem was due to the CPU spending too much time in my render() function. This could possibly be due to a problem with the BufferProvider, or maybe because the CPU was somehow trying to reprocess/reload the dataset vertex buffer every frame -> In order to check this, I used the Time Profiler in Xcode's Instruments. Unfortunately, it seems that the problem is that the application calls this render method (in other words, MTKView's draw() method) only very rarely. Here are some screenshots:
The spike at ~10 seconds is when the cube is loaded
The spikes between ~25-35 seconds are when the dataset is loaded
This image (^) shows the activity between ~10-20 seconds, right after the cube was loaded. This is when the FPS is at ~60. You can see that the main thread spends around 53ms in the render() function during these 10 seconds.
This image (^) shows the activity between ~40-50 seconds, right after the dataset was loaded. This is when the FPS is < 10. You can see that the main thread spends around 4ms in the render() function during these 10 seconds. As you can see, none of the methods which are usually called from within this function are called (ie: the ones we can see called when only the cube is loaded, previous image). Of note, when I load the dataset, the time profiler's timer starts to jump (ie: it stops for a few seconds and then jumps to the current time... repeat).
So this is where I am. The problem seems to be that the CPU somehow gets overloaded with these 42 kB of data... over and over again. I also did a test with the Allocations instrument in Xcode's Instruments. No signs of a memory leak, as far as I could tell (you might have noticed that a lot of this is new to me).
Sorry for the convoluted post, I hope it's not too hard to follow. Thank you all in advance for your help.
Edit:
Here are my shaders, in case you would like to see them:
struct VertexIn {
    packed_float3 position;
    packed_float4 color;
};

struct VertexOut {
    float4 position [[position]];
    float4 color;
    float size [[point_size]];
};

struct Uniforms {
    float4x4 cameraMatrix;
    float4x4 projectionMatrix;
};

vertex VertexOut basic_vertex(const device VertexIn* vertex_array [[ buffer(0) ]],
                              constant Uniforms& uniforms [[ buffer(1) ]],
                              unsigned int vid [[ vertex_id ]]) {
    float4x4 cam_Matrix = uniforms.cameraMatrix;
    float4x4 proj_Matrix = uniforms.projectionMatrix;

    VertexIn VertexIn = vertex_array[vid];

    VertexOut VertexOut;
    VertexOut.position = proj_Matrix * cam_Matrix * float4(VertexIn.position, 1);
    VertexOut.color = VertexIn.color;
    VertexOut.size = 15;

    return VertexOut;
}

fragment half4 basic_fragment(VertexOut interpolated [[stage_in]]) {
    return half4(interpolated.color[0], interpolated.color[1], interpolated.color[2], interpolated.color[3]);
}
I think the main problem is that you're telling Metal to do instanced drawing when you shouldn't be. This line:
rgbaVolumeRenderEncoder!.drawPrimitives(type: .point, vertexStart: 0, vertexCount: datasetVertexCount!, instanceCount: datasetVertexCount!)
is telling Metal to draw datasetVertexCount! instances of each of datasetVertexCount! vertexes. The GPU work is growing with the square of the vertex count. Also, since you don't make use of the instance ID to, for example, tweak the vertex position, all of these instances are identical and thus redundant.
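In other words, the dataset draw call should drop the instance count entirely, something like:
rgbaVolumeRenderEncoder!.drawPrimitives(type: .point,
                                        vertexStart: 0,
                                        vertexCount: datasetVertexCount!)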
I think the same applies to this line:
cubeRenderEncoder!.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount!, instanceCount: self.cubeVertexCount!/3)
although it's not clear what self.cubeVertexCount! is and whether it grows with vertexCount. In any case, since it seems you're using the same pipeline state and thus same shaders which don't make use of the instance ID, it's still useless and wasteful.
Other things:
Why are you using MTLParallelRenderCommandEncoder when you're not actually using the parallelism that it enables? Don't do that.
Everywhere you're using the size method of MemoryLayout, you should almost certainly be using stride instead. And if you're computing the stride of a compound data structure, do not take the stride of one element of that structure and multiply by the number of elements. Take the stride of the whole data structure.
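As a rough sketch of those two points, using a hypothetical interleaved vertex struct purely for illustration (not your actual types):
// Hypothetical interleaved layout (position + RGBA color) -- for illustration only
struct DatasetVertex {
    var position: SIMD3<Float>
    var color: SIMD4<Float>
}

// Prefer the stride of the whole struct over element-size arithmetic
let bufferLength = MemoryLayout<DatasetVertex>.stride * datasetVertexCount!

// A plain render command encoder is enough when everything is encoded on one thread
let encoder = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
encoder?.setRenderPipelineState(pipelineState)
// ... set vertex buffers and draw exactly as before ...
encoder?.endEncoding()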

Keep relative positions of SKSpriteNode from SKShapeNode from CGPath

I have an array of arrays of coordinates (from a shapefile), which I'm trying to make into a series of SKSpriteNodes.
My problem is that I need to keep the relative positions of each of the shapes in the array. If I use SKShapeNodes, it works, as they are created directly from the path I trace, but their resource consumption is quite high and, in particular, I cannot use lighting effects on them.
If I use SKSpriteNodes with a texture created from the shape node, then I lose their relative positions.
I tried calculating the center of each shape, but their positions are still not accurate.
Here's how I draw them so far:
override func didMoveToView(view: SKView)
{
self.backgroundColor = SKColor.blackColor()
let shapeCoordinates:[[(CGFloat, CGFloat)]] = [[(900.66563867095, 401.330302084953), (880.569690215615, 400.455067051099), (879.599839322167, 408.266821560754), (878.358968429675, 418.182833936104), (899.37522863267, 418.54861454054), (900.66563867095, 401.330302084953)],
[(879.599839322167, 408.266821560754), (869.991637153925, 408.122907880045), (870.320569111933, 400.161243286459), (868.569953361733, 400.11339198742), (864.517810669155, 399.54973007215), (858.682258706015, 397.619367903278), (855.665753299048, 395.808813873244), (853.479452218432, 392.811211835046), (847.923492419877, 394.273974470316), (834.320860167515, 397.859104108813), (826.495867917475, 399.921507079808), (829.86572598778, 404.531781837208), (835.898936154083, 409.178035013947), (840.887737516875, 411.839958392806), (847.191868005112, 414.441797809335), (854.251943938193, 416.198384209245), (860.095769038325, 417.277496957155), (866.21091316512, 417.954970608037), (873.27118845149, 418.182833936104), (878.358968429675, 418.182833936104), (879.599839322167, 408.266821560754)],
[(931.018881691707, 402.151689416542), (910.610746904717, 401.600140235583), (910.380693886848, 411.576056681467), (930.79710181083, 411.750012342223), (931.018881691707, 402.151689416542)],
[(880.569690215615, 400.455067051099), (870.320569111933, 400.161243286459), (869.991637153925, 408.122907880045), (879.599839322167, 408.266821560754), (880.569690215615, 400.455067051099)]]
for shapeCoord in shapeCoordinates {
    let path = CGPathCreateMutable()
    var center: (CGFloat, CGFloat) = (0, 0)

    for i in 0...(shapeCoord.count - 1) {
        let x = shapeCoord[i].0
        let y = shapeCoord[i].1
        center.0 += x
        center.1 += y

        if i == 0 {
            CGPathMoveToPoint(path, nil, x, y)
        } else {
            CGPathAddLineToPoint(path, nil, x, y)
        }
    }

    center.0 /= CGFloat(shapeCoord.count)
    center.1 /= CGFloat(shapeCoord.count)

    let shape = SKShapeNode(path: path)
    let texture = self.view?.textureFromNode(shape)
    let sprite = SKSpriteNode(texture: texture)
    sprite.position = CGPointMake(center.0, center.1)

    //self.addChild(shape)
    self.addChild(sprite)
}
}
Is it feasible or should I switch to another technology / method?

How to create a SceneKit SCNSkinner object in code?

I have a Swift app using SceneKit for iOS 8. I load a scene from a .dae file that contains a mesh controlled by a skeleton.
At runtime, I need to modify the texture coordinates. Using a transform is not an option -- I need to compute a different, completely new UV for each vertex in the mesh.
I know geometry is immutable in SceneKit, and I've read that the suggested approach is to make a copy manually. I'm trying to do that, but I always end up crashing when trying to re-create the SCNSkinner in code. The crash is an EXC_BAD_ACCESS inside C3DSourceAccessorGetMutableValuePtrAtIndex. Unfortunately, there is no source code for this, so I'm not sure why exactly it's crashing. I've narrowed it down to the SCNSkinner object attached to the mesh node. If I do not set that, I don't get a crash and things appear to be working.
EDIT: Here is a more complete call stack of the crash:
C3DSourceAccessorGetMutableValuePtrAtIndex
C3DSkinPrepareMeshForGPUIfNeeded
C3DSkinnerMakeCurrentMesh
C3DSkinnerUpdateCurrentMesh
__CFSetApplyFunction_block_invoke
CFBasicHashApply
CFSetApplyFunction
C3DAppleEngineRenderScene
...
I've not found any documentation or example code about how to create an SCNSkinner object manually. Since I'm just creating it based on a previously working mesh, it shouldn't be too difficult. I'm creating the SCNSkinner according to the Swift documentation, passing all of the correct things into the init. However, there is a skeleton property in the SCNSkinner that I'm not sure how to set. I set it to the skeleton that was on the original SCNSkinner of the mesh I'm copying, which I think
should work... but it doesn't. When setting the skeleton property, it does not appear to be assigning. Checking it immediately after the assignment shows that it is still nil. As a test, I tried to set the original mesh's skeleton property to something else, and after the assignment it was left untouched as well.
Can anyone shed any light on what is happening? Or how to correctly create and set up an SCNSkinner object manually?
Here is the code I'm using to manually clone a mesh and replace it with a new one (I have not modified any of the source data here -- I'm simply trying to make sure I can create a copy at this point):
// This is at the start of the app, just so you can see how the scene is set up.
// I add the .dae contents into its own node in the scene. This seems to be the
// standard way to put multiple .dae models into the same scene. This doesn't seem to
// have any impact on the problem I'm having -- I've tried without this indirection
// and the same problem exists.
let scene = SCNScene()
let modelNode = SCNNode()
modelNode.name = "ModelNode"
scene.rootNode.addChildNode(modelNode)
let modelScene = SCNScene(named: "model.dae")
if modelScene != nil {
    if let childNodes = modelScene?.rootNode.childNodes {
        for childNode in childNodes {
            modelNode.addChildNode(childNode as SCNNode)
        }
    }
}
// This happens later in the app after a tap from the user.
let modelNode = scnView.scene!.rootNode.childNodeWithName("ModelNode", recursively: true)
let modelMesh = modelNode?.childNodeWithName("MeshName", recursively: true)
let verts = modelMesh?.geometry!.geometrySourcesForSemantic(SCNGeometrySourceSemanticVertex)
let normals = modelMesh?.geometry!.geometrySourcesForSemantic(SCNGeometrySourceSemanticNormal)
let texcoords = modelMesh?.geometry!.geometrySourcesForSemantic(SCNGeometrySourceSemanticTexcoord)
let boneWeights = modelMesh?.geometry!.geometrySourcesForSemantic(SCNGeometrySourceSemanticBoneWeights)
let boneIndices = modelMesh?.geometry!.geometrySourcesForSemantic(SCNGeometrySourceSemanticBoneIndices)
let geometry = modelMesh?.geometry!.geometryElementAtIndex(0)
// Note: the vertex and normal data is shared.
let vertsData = NSData(data: verts![0].data)
let texcoordsData = NSData(data: texcoords![0].data)
let boneWeightsData = NSData(data: boneWeights![0].data)
let boneIndicesData = NSData(data: boneIndices![0].data)
let geometryData = NSData(data: geometry!.data!)
let newVerts = SCNGeometrySource(data: vertsData, semantic: SCNGeometrySourceSemanticVertex, vectorCount: verts![0].vectorCount, floatComponents: verts![0].floatComponents, componentsPerVector: verts![0].componentsPerVector, bytesPerComponent: verts![0].bytesPerComponent, dataOffset: verts![0].dataOffset, dataStride: verts![0].dataStride)
let newNormals = SCNGeometrySource(data: vertsData, semantic: SCNGeometrySourceSemanticNormal, vectorCount: normals![0].vectorCount, floatComponents: normals![0].floatComponents, componentsPerVector: normals![0].componentsPerVector, bytesPerComponent: normals![0].bytesPerComponent, dataOffset: normals![0].dataOffset, dataStride: normals![0].dataStride)
let newTexcoords = SCNGeometrySource(data: texcoordsData, semantic: SCNGeometrySourceSemanticTexcoord, vectorCount: texcoords![0].vectorCount, floatComponents: texcoords![0].floatComponents, componentsPerVector: texcoords![0].componentsPerVector, bytesPerComponent: texcoords![0].bytesPerComponent, dataOffset: texcoords![0].dataOffset, dataStride: texcoords![0].dataStride)
let newBoneWeights = SCNGeometrySource(data: boneWeightsData, semantic: SCNGeometrySourceSemanticBoneWeights, vectorCount: boneWeights![0].vectorCount, floatComponents: boneWeights![0].floatComponents, componentsPerVector: boneWeights![0].componentsPerVector, bytesPerComponent: boneWeights![0].bytesPerComponent, dataOffset: boneWeights![0].dataOffset, dataStride: boneWeights![0].dataStride)
let newBoneIndices = SCNGeometrySource(data: boneIndicesData, semantic: SCNGeometrySourceSemanticBoneIndices, vectorCount: boneIndices![0].vectorCount, floatComponents: boneIndices![0].floatComponents, componentsPerVector: boneIndices![0].componentsPerVector, bytesPerComponent: boneIndices![0].bytesPerComponent, dataOffset: boneIndices![0].dataOffset, dataStride: boneIndices![0].dataStride)
let newGeometry = SCNGeometryElement(data: geometryData, primitiveType: geometry!.primitiveType, primitiveCount: geometry!.primitiveCount, bytesPerIndex: geometry!.bytesPerIndex)
let newMeshGeometry = SCNGeometry(sources: [newVerts, newNormals, newTexcoords, newBoneWeights, newBoneIndices], elements: [newGeometry])
newMeshGeometry.firstMaterial = modelMesh?.geometry!.firstMaterial
let newModelMesh = SCNNode(geometry: newMeshGeometry)
let bones = modelMesh?.skinner?.bones
let boneInverseBindTransforms = modelMesh?.skinner?.boneInverseBindTransforms
let skeleton = modelMesh!.skinner!.skeleton!
let baseGeometryBindTransform = modelMesh!.skinner!.baseGeometryBindTransform
newModelMesh.skinner = SCNSkinner(baseGeometry: newMeshGeometry, bones: bones, boneInverseBindTransforms: boneInverseBindTransforms, boneWeights: newBoneWeights, boneIndices: newBoneIndices)
newModelMesh.skinner?.baseGeometryBindTransform = baseGeometryBindTransform
// Before this assignment, newModelMesh.skinner?.skeleton is nil.
newModelMesh.skinner?.skeleton = skeleton
// After, it is still nil... however, skeleton itself is completely valid.
modelMesh?.removeFromParentNode()
newModelMesh.name = "MeshName"
let meshParentNode = modelNode?.childNodeWithName("MeshParentNode", recursively: true)
meshParentNode?.addChildNode(newModelMesh)
These three lines may help you find the solution:
SCNNode *hero = [SCNScene sceneNamed:@"Hero"].rootNode;
SCNNode *hat = [SCNScene sceneNamed:@"FancyFedora"].rootNode;
hat.skinner.skeleton = hero.skinner.skeleton;
[Export ("initWithFrame:")]
public UIView (System.Drawing.RectangleF frame) : base (NSObjectFlag.Empty)
{
    // Invoke the init method now.
    var initWithFrame = new Selector ("initWithFrame:").Handle;
    if (IsDirectBinding)
        Handle = ObjCRuntime.Messaging.IntPtr_objc_msgSend_RectangleF (this.Handle, initWithFrame, frame);
    else
        Handle = ObjCRuntime.Messaging.IntPtr_objc_msgSendSuper_RectangleF (this.SuperHandle, initWithFrame, frame);
}
See this link as well.
I don't specifically know what causes your code to crash, but here is a way of generating a mesh, bones, and skinning that mesh -- all from code. Swift 4 and iOS 12.
In the example, there is a mesh representing the concatenation of two cylinders, with one of the cylinders branching off at a 45 degree angle, like so:
\
|
The cylinders are just extruded triangles, i.e., radialSegmentCount = 3. (Note that there are 12 vertices, not 9, since the two cylinders aren't really conjoined.) The triangles are ordered like this:
v5
^
v3 /__|__\ v1
| | |
| v4 |
v2 |/___\| v0
There are 3 bones, corresponding to the heads and feet of the cylinders, where the middle bone corresponds to the head of the bottom cylinder and simultaneously the foot of the top cylinder. So for example, vertices v0, v2, and v4 correspond to bone0; v1, v3, v5 correspond to bone1, and so forth. That explains why boneIndices (see below) has the value that it does.
The resting positions of the bones corresponds to the resting positions of the cylinders in the geometry (bone2 sprouts off at a 45 degree angle from bone1, just like the cylinder geometry).
With that as context, the following code creates everything needed to skin the geometry:
let vertices = [float3(0.17841241, 0.0, 0.0), float3(0.17841241, 1.0, 0.0), float3(-0.089206174, 0.0, 0.1545097), float3(-0.089206174, 1.0, 0.1545097), float3(-0.089206256, 0.0, -0.15450965), float3(-0.089206256, 1.0, -0.15450965), float3(0.12615661, 1.1261566, 0.0), float3(-0.58094996, 1.8332633, 0.0), float3(-0.063078284, 0.9369217, 0.1545097), float3(-0.7701849, 1.6440284, 0.1545097), float3(-0.063078344, 0.93692166, -0.15450965), float3(-0.77018493, 1.6440284, -0.15450965)]
let indices: [UInt8] = [0, 1, 2, 3, 4, 5, 0, 1, 1, 6, 6, 7, 8, 9, 10, 11, 6, 7]
let geometrySource = SCNGeometrySource(vertices: vertices.map { SCNVector3($0) })
let geometryElement = SCNGeometryElement(indices: indices, primitiveType: .triangleStrip)
let geometry = SCNGeometry(sources: [geometrySource], elements: [geometryElement])
let bone0 = SCNNode()
bone0.simdPosition = float3(0,0,0)
let bone1 = SCNNode()
bone1.simdPosition = float3(0,1,0)
let bone2 = SCNNode()
bone2.simdPosition = float3(0,1,0) + normalize(float3(-1,1,0))
let bones = [bone0, bone1, bone2]
let boneInverseBindTransforms: [NSValue]? = bones.map { NSValue(scnMatrix4: SCNMatrix4Invert($0.transform)) }
var boneWeights: [Float] = vertices.map { _ in 1.0 }
var boneIndices: [UInt8] = [
0, 1, 0, 1, 0, 1,
1, 2, 1, 2, 1, 2,
]
let boneWeightsData = Data(bytesNoCopy: &boneWeights, count: boneWeights.count * MemoryLayout<Float>.size, deallocator: .none)
let boneIndicesData = Data(bytesNoCopy: &boneIndices, count: boneIndices.count * MemoryLayout<UInt8>.size, deallocator: .none)
let boneWeightsGeometrySource = SCNGeometrySource(data: boneWeightsData, semantic: .boneWeights, vectorCount: boneWeights.count, usesFloatComponents: true, componentsPerVector: 1, bytesPerComponent: MemoryLayout<Float>.size, dataOffset: 0, dataStride: MemoryLayout<Float>.size)
let boneIndicesGeometrySource = SCNGeometrySource(data: boneIndicesData, semantic: .boneIndices, vectorCount: boneIndices.count, usesFloatComponents: false, componentsPerVector: 1, bytesPerComponent: MemoryLayout<UInt8>.size, dataOffset: 0, dataStride: MemoryLayout<UInt8>.size)
let skinner = SCNSkinner(baseGeometry: geometry, bones: bones, boneInverseBindTransforms: boneInverseBindTransforms, boneWeights: boneWeightsGeometrySource, boneIndices: boneIndicesGeometrySource)
let node = SCNNode(geometry: geometry)
node.skinner = skinner
Note: In most cases, you should use UInt16 not UInt8.
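For reference, a sketch of what the UInt16 variant of the bone-index source could look like (only the element type and byte sizes change):
var boneIndices16: [UInt16] = [
    0, 1, 0, 1, 0, 1,
    1, 2, 1, 2, 1, 2,
]
let boneIndices16Data = Data(bytes: &boneIndices16, count: boneIndices16.count * MemoryLayout<UInt16>.size)
let boneIndices16Source = SCNGeometrySource(data: boneIndices16Data,
                                            semantic: .boneIndices,
                                            vectorCount: boneIndices16.count,
                                            usesFloatComponents: false,
                                            componentsPerVector: 1,
                                            bytesPerComponent: MemoryLayout<UInt16>.size,
                                            dataOffset: 0,
                                            dataStride: MemoryLayout<UInt16>.size)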