Metal/Swift - First element of vertex buffer has one wrong value

I'm just trying to render a red square using Metal, and I'm creating a vertex buffer from an array of Vertex structures that look like this:
struct Vertex {
    var position: SIMD3<Float>
    var color: SIMD4<Float>
}
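For reference, SIMD3<Float> is padded to 16 bytes, so each Vertex occupies 32 bytes. A quick sanity check you can run anywhere (a sketch, not part of the original project):

print(MemoryLayout<SIMD3<Float>>.stride)          // 16 (padded to 16-byte alignment)
print(MemoryLayout<SIMD4<Float>>.stride)          // 16
print(MemoryLayout<Vertex>.stride)                // 32
print(MemoryLayout<Vertex>.offset(of: \.color)!)  // 16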
This is where I'm rendering the square:
var vertices: [Vertex] = [
    Vertex(position: [-0.5, -0.5, 0], color: [1, 0, 0, 1]),
    Vertex(position: [-0.5,  0.5, 0], color: [1, 0, 0, 1]),
    Vertex(position: [ 0.5, -0.5, 0], color: [1, 0, 0, 1]),
    Vertex(position: [ 0.5,  0.5, 0], color: [1, 0, 0, 1])
]
var vertexBuffer: MTLBuffer?
func render(using renderCommandEncoder: MTLRenderCommandEncoder) {
    if self.vertexBuffer == nil {
        self.vertexBuffer = self.device.makeBuffer(
            bytes: self.vertices,
            length: MemoryLayout<Vertex>.stride * self.vertices.count,
            options: []
        )
    }
    if let vertexBuffer = self.vertexBuffer {
        renderCommandEncoder.setRenderPipelineState(RenderPipelineStates.defaultState)
        renderCommandEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
        renderCommandEncoder.drawPrimitives(
            type: .triangleStrip,
            vertexStart: 0,
            vertexCount: vertexBuffer.length / MemoryLayout<Vertex>.stride
        )
    }
}
This is what my render pipeline state looks like:
let library = device.makeDefaultLibrary()!
let vertexShader = library.makeFunction(name: "basicVertexShader")
let fragmentShader = library.makeFunction(name: "basicFragmentShader")
let renderPipelineDescriptor = MTLRenderPipelineDescriptor()
renderPipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
renderPipelineDescriptor.vertexFunction = vertexShader
renderPipelineDescriptor.fragmentFunction = fragmentShader
renderPipelineDescriptor.sampleCount = 4
let vertexDescriptor = MTLVertexDescriptor()
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].bufferIndex = 0 // Position
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[1].format = .float4
vertexDescriptor.attributes[1].bufferIndex = 0 // Color
vertexDescriptor.attributes[1].offset = MemoryLayout<SIMD3<Float>>.stride
vertexDescriptor.layouts[0].stride = MemoryLayout<Vertex>.stride
renderPipelineDescriptor.vertexDescriptor = vertexDescriptor
self.defaultState = try! device.makeRenderPipelineState(descriptor: renderPipelineDescriptor)
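As an aside, the color attribute's offset can be derived from the struct itself instead of being hard-coded as the stride of SIMD3<Float>; a small sketch, assuming Swift 4.2 or later for MemoryLayout.offset(of:):

// Sketch: let the compiler compute the attribute offset
vertexDescriptor.attributes[1].offset = MemoryLayout<Vertex>.offset(of: \.color)!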
The vertex and fragment shaders just pass through the position and color. For some reason, when this is rendered, the first float of the first vertex's color comes into the vertex shader as an extremely small value, effectively showing black. It only happens for the red value of the first vertex in the array.
(Screenshot: red square with one black vertex)
I can see from debugging the GPU frame that the first vertex has a red color component of 5E-41 (essentially 0).
I have no idea why this is the case; it seems to happen somewhere around the time the vertices are added to the vertex buffer. I'm guessing it has something to do with my render pipeline's vertex descriptor, but I haven't been able to figure out what's wrong. Thanks for any help!

This is, with high likelihood, a duplicate of this question. I'd encourage you to consider the workarounds there, and also to file your own feedback to raise visibility of this bug. - warrenm
Correct, this appears to be a driver bug of some sort. I fixed it by adding the .cpuCacheModeWriteCombined option to makeBuffer and have filed feedback.
self.vertexBuffer = self.device.makeBuffer(
    bytes: self.vertices,
    length: MemoryLayout<Vertex>.stride * self.vertices.count,
    options: [.cpuCacheModeWriteCombined]
)
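If you'd rather not change the cache mode, another workaround to try (a sketch only, not verified against this particular driver bug) is to allocate the buffer empty and copy the vertex data in through its contents() pointer instead of using makeBuffer(bytes:):

// Sketch: allocate first, then copy the vertex data in manually
if let buffer = self.device.makeBuffer(
    length: MemoryLayout<Vertex>.stride * self.vertices.count,
    options: []
) {
    self.vertices.withUnsafeBytes { src in
        buffer.contents().copyMemory(from: src.baseAddress!, byteCount: src.count)
    }
    self.vertexBuffer = buffer
}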

Related

How to render individual pixels for one layer of a 3DTexture in a framebuffer?

I have a 4x4x4 3DTexture which I am initializing and showing correctly to color my 4x4x4 grid of vertices (see attached red grid with one white pixel - 0,0,0).
However, when I render the 4 layers in a framebuffer (all four at one time, using gl.COLOR_ATTACHMENT0 through gl.COLOR_ATTACHMENT3), only four of the sixteen pixels on a layer are successfully rendered by my fragment shader (to be turned green).
When I only do one layer, with gl.COLOR_ATTACHMENT0, the same 4 pixels show up correctly altered for the 1 layer, and the other 3 layers stay with the original color unchanged. When I change the gl.viewport(0, 0, size, size) (size = 4 in this example) to something else, like the whole screen or sizes other than 4, then different pixels are written, but never more than 4. My goal is to individually specify all 16 pixels of each layer precisely. I'm using colors for now, as a learning experience, but the texture is really for position and velocity information for each vertex for a physics simulation. I'm assuming (faulty assumption?) that with 64 points/vertices, I'm running the vertex shader and the fragment shader 64 times each, coloring one pixel each invocation.
I've removed all but the vital code from the shaders. I've left the JavaScript unaltered. I suspect my problem is initializing and passing the array of vertex positions incorrectly.
// Set x,y position coordinates to be used to extract data from one plane of our data cube.
// Remember, we handle z as one layer of our cube, which is composed of a stack of x-y planes.
const oneLayerVertices = new Float32Array(size * size * 2);
count = 0;
for (var j = 0; j < size; j++) {
  for (var i = 0; i < size; i++) {
    oneLayerVertices[count] = i;
    count++;
    oneLayerVertices[count] = j;
    count++;
    //oneLayerVertices[count] = 0;
    //count++;
    //oneLayerVertices[count] = 0;
    //count++;
  }
}
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
  position: {
    numComponents: 2,
    data: oneLayerVertices,
  },
});
And then I'm using the bufferInfo as follows:
gl.useProgram(computeProgramInfo.program);
twgl.setBuffersAndAttributes(gl, computeProgramInfo, bufferInfo);
gl.viewport(0, 0, size, size); // remember size = 4
outFramebuffers.forEach((fb, ndx) => {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.drawBuffers([
    gl.COLOR_ATTACHMENT0,
    gl.COLOR_ATTACHMENT1,
    gl.COLOR_ATTACHMENT2,
    gl.COLOR_ATTACHMENT3
  ]);
  const baseLayerTexCoord = (ndx * numLayersPerFramebuffer);
  console.log("My baseLayerTexCoord is " + baseLayerTexCoord);
  twgl.setUniforms(computeProgramInfo, {
    baseLayerTexCoord,
    u_kernel: [
      0, 0, 0,
      0, 0, 0,
      0, 0, 0,

      0, 0, 1,
      0, 0, 0,
      0, 0, 0,

      0, 0, 0,
      0, 0, 0,
      0, 0, 0,
    ],
    u_position: inPos,
    u_velocity: inVel,
    loopCounter: loopCounter,
    numLayersPerFramebuffer: numLayersPerFramebuffer
  });
  gl.drawArrays(gl.POINTS, 0, 16);
});
VERTEX SHADER:
calc_vertex:
const compute_vs = `#version 300 es
precision highp float;

in vec4 position;

void main() {
  gl_Position = position;
}
`;
FRAGMENT SHADER:
calc_fragment:
const compute_fs = `#version 300 es
precision highp float;

out vec4 ourOutput[4];

void main() {
  ourOutput[0] = vec4(0, 1, 0, 1);
  ourOutput[1] = vec4(0, 1, 0, 1);
  ourOutput[2] = vec4(0, 1, 0, 1);
  ourOutput[3] = vec4(0, 1, 0, 1);
}
`;
I’m not sure what you’re trying to do and what you think the positions will do.
You have 2 options for GPU simulation in WebGL2:

1. Use transform feedback.

In this case you pass in attributes and generate data in buffers. Effectively you have in attributes and out attributes, and generally you only run the vertex shader. To put it another way, your varyings (the output of your vertex shader) get written to a buffer. So you have at least 2 sets of buffers, currentState and nextState, and your vertex shader reads attributes from currentState and writes them to nextState.

There is an example of writing to buffers via transform feedback here, though that example only uses transform feedback at the start to fill buffers once.

2. Use textures attached to framebuffers.

In this case, similarly, you have 2 textures, currentState and nextState. You set nextState to be your render target and read from currentState to generate the next state.

The difficulty is that you can only render to textures by outputting primitives in the vertex shader. If currentState and nextState are 2D textures, that's trivial: just output a -1.0 to +1.0 quad from the vertex shader and all pixels in nextState will be rendered to.

If you're using a 3D texture, it's the same thing, except you can only render to 4 layers at a time (well, gl.getParameter(gl.MAX_DRAW_BUFFERS)), so you'd have to do something like
for (let layer = 0; layer < numLayers; layer += 4) {
  // setup framebuffer to use these 4 layers
  gl.drawXXX(...); // draw to 4 layers
}
or better
// at init time
const fbs = [];
for (let layer = 0; layer < numLayers; layer += 4) {
  fbs.push(createFramebufferForThese4Layers(layer));
}

// at draw time
fbs.forEach((fb, ndx) => {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.drawXXX(...); // draw to 4 layers
});
I’m guessing multiple draw calls is slower than one draw call so another solution is to instead treat a 2D texture as a 3D array and calculate texture coordinates appropriately.
I don’t know which is better. If you’re simulating particles and they only need to look at their own currentState then transform feedback is easier. If need each particle to be able to look at the state of other particles, in other words you need random access to all the data, then your only option is to store the data in textures.
As for positions I don't understand your code. Positions define a primitives, either POINTS, LINES, or TRIANGLES so how does passing integer X, Y values into our vertex shader help you define POINTS, LINES or TRIANGLES?
It looks like you're trying to use POINTS in which case you need to set gl_PointSize to the size of the point you want to draw (1.0) and you need to convert those positions into clip space
gl_Position = vec4((position.xy + 0.5) / resolution, 0, 1);
where resolution is the size of the texture.
But doing it this way will be slow. It's much better to just draw a full-size (-1 to +1) clip space quad. For every pixel in the destination, the fragment shader will be called. gl_FragCoord.xy will be the location of the center of the pixel currently being rendered, so for the first pixel in the bottom left corner gl_FragCoord.xy will be (0.5, 0.5). The pixel to the right of that will be (1.5, 0.5); the pixel to the right of that will be (2.5, 0.5). You can use that value to calculate how to access currentState. Assuming a 1:1 mapping, the easiest way would be
int n = numberOfLayerThatsAttachedToCOLOR_ATTACHMENT0;
vec4 currentStateValueForLayerN = texelFetch(
    currentStateTexture, ivec3(gl_FragCoord.xy, n + 0), 0);
vec4 currentStateValueForLayerNPlus1 = texelFetch(
    currentStateTexture, ivec3(gl_FragCoord.xy, n + 1), 0);
vec4 currentStateValueForLayerNPlus2 = texelFetch(
    currentStateTexture, ivec3(gl_FragCoord.xy, n + 2), 0);
...
vec4 nextStateForLayerN = computeNextStateFromCurrentState(currentStateValueForLayerN);
vec4 nextStateForLayerNPlus1 = computeNextStateFromCurrentState(currentStateValueForLayerNPlus1);
vec4 nextStateForLayerNPlus2 = computeNextStateFromCurrentState(currentStateValueForLayerNPlus2);
...
outColor[0] = nextStateForLayerN;
outColor[1] = nextStateForLayerNPlus1;
outColor[2] = nextStateForLayerNPlus2;
...
I don’t know if you needed this but just to test here’s a simple example that renders a different color to every pixel of a 4x4x4 texture and then displays them.
const pointVS = `#version 300 es
uniform int size;
uniform highp sampler3D tex;
out vec4 v_color;

void main() {
  int x = gl_VertexID % size;
  int y = (gl_VertexID / size) % size;
  int z = gl_VertexID / (size * size);
  v_color = texelFetch(tex, ivec3(x, y, z), 0);
  gl_PointSize = 8.0;
  vec3 normPos = vec3(x, y, z) / float(size);
  gl_Position = vec4(
      mix(-0.9, 0.6, normPos.x) + mix(0.0, 0.3, normPos.y),
      mix(-0.6, 0.9, normPos.z) + mix(0.0, -0.3, normPos.y),
      0,
      1);
}
`;

const pointFS = `#version 300 es
precision highp float;
in vec4 v_color;
out vec4 outColor;

void main() {
  outColor = v_color;
}
`;

const rtVS = `#version 300 es
in vec4 position;

void main() {
  gl_Position = position;
}
`;

const rtFS = `#version 300 es
precision highp float;
uniform vec2 resolution;
out vec4 outColor[4];

void main() {
  vec2 xy = gl_FragCoord.xy / resolution;
  outColor[0] = vec4(1, 0, xy.x, 1);
  outColor[1] = vec4(0.5, xy.yx, 1);
  outColor[2] = vec4(xy, 0, 1);
  outColor[3] = vec4(1, vec2(1) - xy, 1);
}
`;
function main() {
  const gl = document.querySelector('canvas').getContext('webgl2');
  if (!gl) {
    return alert('need webgl2');
  }

  const pointProgramInfo = twgl.createProgramInfo(gl, [pointVS, pointFS]);
  const rtProgramInfo = twgl.createProgramInfo(gl, [rtVS, rtFS]);

  const size = 4;
  const numPoints = size * size * size;
  const tex = twgl.createTexture(gl, {
    target: gl.TEXTURE_3D,
    width: size,
    height: size,
    depth: size,
  });

  const clipspaceFullSizeQuadBufferInfo = twgl.createBufferInfoFromArrays(gl, {
    position: {
      data: [
        -1, -1,
         1, -1,
        -1,  1,
        -1,  1,
         1, -1,
         1,  1,
      ],
      numComponents: 2,
    },
  });

  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  for (let i = 0; i < 4; ++i) {
    gl.framebufferTextureLayer(
        gl.FRAMEBUFFER,
        gl.COLOR_ATTACHMENT0 + i,
        tex,
        0,  // mip level
        i); // layer
  }

  gl.drawBuffers([
    gl.COLOR_ATTACHMENT0,
    gl.COLOR_ATTACHMENT1,
    gl.COLOR_ATTACHMENT2,
    gl.COLOR_ATTACHMENT3,
  ]);
  gl.viewport(0, 0, size, size);
  gl.useProgram(rtProgramInfo.program);
  twgl.setBuffersAndAttributes(gl, rtProgramInfo, clipspaceFullSizeQuadBufferInfo);
  twgl.setUniforms(rtProgramInfo, {
    resolution: [size, size],
  });
  twgl.drawBufferInfo(gl, clipspaceFullSizeQuadBufferInfo);

  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
  gl.drawBuffers([gl.BACK]);
  gl.useProgram(pointProgramInfo.program);
  twgl.setUniforms(pointProgramInfo, {
    tex,
    size,
  });
  gl.drawArrays(gl.POINTS, 0, numPoints);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>

Metal shader not working as expected

If I run the following vertex shader in Metal/Swift I get a nice rectangle on the screen:
vertex Vertex vertexShader(uint k [[ vertex_id ]],
                           device float2* position [[buffer(1)]]) {
    Vertex output;
    float2 pos = position[k];
    output.position = float4(pos, 0, 1);
    return output;
}

// position  [0.0, 0.0,  0.5, 0.0,  0.0, 0.5,  0.5, 0.5]
// indexList [0, 1, 2, 2, 1, 3]
Now if I run the following I get a blank screen:
vertex Vertex vertexShader(uint k [[ vertex_id ]],
                           device float3* position [[buffer(1)]]) {
    Vertex output;
    float3 pos = position[k];
    output.position = float4(pos, 1);
    return output;
}

// position  [0.0, 0.0, 0.0,  0.5, 0.0, 0.0,  0.0, 0.5, 0.0,  0.5, 0.5, 0.0]
// indexList [0, 1, 2, 2, 1, 3]
It seems to me these should produce identical results. What am I missing?
How exactly are you filling the buffer associated with index 1 in your app code?
I suspect you're just supplying an array of floats. The problem is that float3 is not packed; its layout is not the same as 3 floats. There's padding, so its size is actually the same as float4 (4 floats).
Probably, the simplest fix is to declare position as a pointer to packed_float3.
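Alternatively, keep float3 in the shader and give the buffer float3's padded layout on the Swift side. A sketch using SIMD3<Float>, whose 16-byte stride matches MSL float3 (device here is assumed to be your MTLDevice):

// Sketch: SIMD3<Float> already has MSL float3's padded 16-byte layout
let positions: [SIMD3<Float>] = [
    [0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0],
    [0.5, 0.5, 0.0]
]
let positionBuffer = device.makeBuffer(
    bytes: positions,
    length: MemoryLayout<SIMD3<Float>>.stride * positions.count,
    options: []
)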

What is an example of drawing custom nodes with vertices in swift SceneKit?

This area is hardly documented online, and it would be great to see a working Swift 3 example of, say, a custom-drawn cube with manual SCNVector3s. There is this in Objective-C, but not Swift. This might not be a usual form of question, but I know it would help many. If there is somewhere I missed, please mention it.
The documentation (SCNGeometrySource, etc.) is not very helpful.
Thanks
A custom geometry is constructed from a set of vertices and normals.
Vertices
In this context, a vertex is a point where two or more lines intersect. For a cube, the vertices are the corners shown in the following figure.
We construct the geometry by building the cube's faces with a set of triangles, two triangles per face. Our first triangle is defined by vertices 0, 2, and 3 as shown in the below figure, and the second triangle is defined by vertices 0, 1, and 2. It is important to note that each triangle has a front and back side. The side of the triangle is determined by the order of the vertices, where the front side is specified in counter-clockwise order. For our cube, the front side will always be the outside of the cube.
If the cube's center is the origin, the six vertices that define one of the cube's faces can be defined by
let vertices: [SCNVector3] = [
    SCNVector3(x: -1, y: -1, z: 1), // 0
    SCNVector3(x:  1, y:  1, z: 1), // 2
    SCNVector3(x: -1, y:  1, z: 1), // 3
    SCNVector3(x: -1, y: -1, z: 1), // 0
    SCNVector3(x:  1, y: -1, z: 1), // 1
    SCNVector3(x:  1, y:  1, z: 1)  // 2
]
and we create the vertex source by
let vertexSource = SCNGeometrySource(vertices: vertices)
At this point, we have a vertex source that can be used to construct a face of the cube; however, SceneKit doesn't know how the triangle should react to light sources in the scene. To properly reflect light, we need to provide our geometry with at least one normal vector for each vertex.
Normals
A normal is a vector that specifies the orientation of a vertex, which affects how light reflects off the corresponding triangle. In this case, the normal vectors for the six vertices of the triangle are the same; they all point in the positive z direction (i.e., x = 0, y = 0, and z = 1); see the red arrows in the below figure.
The normals are defined by
let normals: [SCNVector3] = [
    SCNVector3(x: 0, y: 0, z: 1), // 0
    SCNVector3(x: 0, y: 0, z: 1), // 2
    SCNVector3(x: 0, y: 0, z: 1), // 3
    SCNVector3(x: 0, y: 0, z: 1), // 0
    SCNVector3(x: 0, y: 0, z: 1), // 1
    SCNVector3(x: 0, y: 0, z: 1)  // 2
]
and the source is defined by
let normalSource = SCNGeometrySource(normals: normals)
We now have the sources (vertices and normals) needed to construct a limited geometry, i.e., one cube face (two triangles). The final piece is to create an array of indices into the vertex and normal arrays. In this case, the indices are sequential because the vertices are in the order they are used.
var indices:[Int32] = [0, 1, 2, 3, 4, 5]
From the indices, we create a geometry element. The setup is a bit more involved because SCNGeometryElement requires an NSData as a parameter.
let indexData = NSData(bytes: &indices, length: MemoryLayout<Int32>.size * indices.count)
let element = SCNGeometryElement(
    data: indexData as Data,
    primitiveType: .triangles,
    primitiveCount: indices.count / 3, // 6 indices describe 2 triangles
    bytesPerIndex: MemoryLayout<Int32>.size
)
We can now create the custom geometry with
let geometry = SCNGeometry(sources: [vertexSource, normalSource], elements: [element])
and lastly create a node and assign the custom geometry to its geometry property
let node = SCNNode()
node.geometry = geometry
scene.rootNode.addChildNode(node)
We now extend the vertices and normals to include all of the cube faces:
// The vertices
let v0 = SCNVector3(x: -1, y: -1, z:  1)
let v1 = SCNVector3(x:  1, y: -1, z:  1)
let v2 = SCNVector3(x:  1, y:  1, z:  1)
let v3 = SCNVector3(x: -1, y:  1, z:  1)
let v4 = SCNVector3(x: -1, y: -1, z: -1)
let v5 = SCNVector3(x:  1, y: -1, z: -1)
let v6 = SCNVector3(x: -1, y:  1, z: -1)
let v7 = SCNVector3(x:  1, y:  1, z: -1)

// All the cube faces (two triangles per face)
let vertices: [SCNVector3] = [
    // Front face
    v0, v2, v3,
    v0, v1, v2,
    // Right face
    v1, v7, v2,
    v1, v5, v7,
    // Back
    v5, v6, v7,
    v5, v4, v6,
    // Left
    v4, v3, v6,
    v4, v0, v3,
    // Top
    v3, v7, v6,
    v3, v2, v7,
    // Bottom
    v1, v4, v5,
    v1, v0, v4
]

let normalsPerFace = 6
let plusX  = SCNVector3(x:  1, y:  0, z:  0)
let minusX = SCNVector3(x: -1, y:  0, z:  0)
let plusZ  = SCNVector3(x:  0, y:  0, z:  1)
let minusZ = SCNVector3(x:  0, y:  0, z: -1)
let plusY  = SCNVector3(x:  0, y:  1, z:  0)
let minusY = SCNVector3(x:  0, y: -1, z:  0)

// Create an array with the normal direction for each vertex. Each array element
// is repeated 6 times with the map function. The resulting array of arrays
// is then flattened to a single array.
let normals: [SCNVector3] = [
    plusZ,
    plusX,
    minusZ,
    minusX,
    plusY,
    minusY
].map { [SCNVector3](repeating: $0, count: normalsPerFace) }.flatMap { $0 }

// Create an array of indices [0, 1, 2, ..., N-1]
let indices = vertices.enumerated().map { Int32($0.0) }

let vertexSource = SCNGeometrySource(vertices: vertices)
let normalSource = SCNGeometrySource(normals: normals)

// Pass the array directly so the pointer stays valid for the duration of the call
let indexData = NSData(bytes: indices, length: MemoryLayout<Int32>.size * indices.count)
let element = SCNGeometryElement(
    data: indexData as Data,
    primitiveType: .triangles,
    primitiveCount: indices.count / 3,
    bytesPerIndex: MemoryLayout<Int32>.size
)

let geometry = SCNGeometry(sources: [vertexSource, normalSource], elements: [element])

// Create a node and assign our custom geometry
let node = SCNNode()
node.geometry = geometry
scene.rootNode.addChildNode(node)
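To give the cube a visible color, attach a material just as you would for any built-in geometry; a minimal sketch:

// Optional: a simple colored material for the cube
let material = SCNMaterial()
material.diffuse.contents = UIColor.red
geometry.materials = [material]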

SceneKit pass uniform vector to shader modifiers

I'm trying to pass a GLKVector4 to a shader that should receive it as a vec4. I'm using a fragment shader modifier:
material.shaderModifiers = [ SCNShaderModifierEntryPoint.fragment: shaderModifier ]
where shaderModifier is:
// color changes
uniform float colorModifier;
uniform vec4 colorOffset;
vec4 color = _output.color;
color = color + colorOffset;
color = color + vec4(0.0, colorModifier, 0.0, 0.0);
_output.color = color;
(I'm simply adding a color offset.) I've tried:
material.setValue(GLKVector4(v: (250.0, 0.0, 0.0, 0.0)), forKey: "colorOffset")
which doesn't work (no offset is added and the shader uses the default value, which is (0, 0, 0, 0)). The same happens if I replace GLKVector4 with SCNVector4.
Following this I've also tried:
let points: [float2] = [float2(250.0), float2(0.0), float2(0.0), float2(0.0)]
material.setValue(NSData(bytes: points, length: points.count * sizeof(float2)), forKey: "colorOffset")
However, I can pass a float value to the uniform colorModifier easily by doing:
material.setValue(250.0, forKey: "colorModifier")
and that will increase the green channel as expected.
You have to use NSValue, which has a convenience initializer for SCNVector4:
let v = SCNVector4(x: 250.0, y: 0.0, z: 0.0, w: 0.0)
material.setValue(NSValue(scnVector4: v), forKey: "colorOffset")
It'd be too good if SceneKit could handle its own types directly...
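Putting it together, a minimal end-to-end sketch (assuming Swift 4 multiline strings; the offset values are only illustrative):

// Sketch: fragment shader modifier plus an NSValue-wrapped uniform
let shaderModifier = """
uniform float colorModifier;
uniform vec4 colorOffset;

_output.color = _output.color + colorOffset + vec4(0.0, colorModifier, 0.0, 0.0);
"""
material.shaderModifiers = [SCNShaderModifierEntryPoint.fragment: shaderModifier]
material.setValue(NSValue(scnVector4: SCNVector4(x: 0.25, y: 0.0, z: 0.0, w: 0.0)), forKey: "colorOffset")
material.setValue(0.5, forKey: "colorModifier")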

SceneKit – Custom geometry does not show up

I should see 2 yellow triangles, but I see nothing.
class Terrain {
    private class func createGeometry() -> SCNGeometry {
        let sources = [
            SCNGeometrySource(vertices: [
                SCNVector3(x: -1.0, y: -1.0, z: 0.0),
                SCNVector3(x: -1.0, y:  1.0, z: 0.0),
                SCNVector3(x:  1.0, y:  1.0, z: 0.0),
                SCNVector3(x:  1.0, y: -1.0, z: 0.0)], count: 4),
            SCNGeometrySource(normals: [
                SCNVector3(x: 0.0, y: 0.0, z: -1.0),
                SCNVector3(x: 0.0, y: 0.0, z: -1.0),
                SCNVector3(x: 0.0, y: 0.0, z: -1.0),
                SCNVector3(x: 0.0, y: 0.0, z: -1.0)], count: 4)
        ]
        let elements = [
            SCNGeometryElement(indices: [0, 2, 3, 0, 1, 2], primitiveType: .Triangles)
        ]
        let geo = SCNGeometry(sources: sources, elements: elements)

        let mat = SCNMaterial()
        mat.diffuse.contents = UIColor.yellowColor()
        mat.doubleSided = true
        geo.materials = [mat]
        return geo
    }

    class func createNode() -> SCNNode {
        let node = SCNNode(geometry: createGeometry())
        node.name = "Terrain"
        node.position = SCNVector3()
        return node
    }
}
I use it as follows:
let terrain = Terrain.createNode()
sceneView.scene?.rootNode.addChildNode(terrain)
let camera = SCNCamera()
camera.zFar = 10000
self.camera = SCNNode()
self.camera.camera = camera
self.camera.position = SCNVector3(x: -20, y: 15, z: 30)
let constraint = SCNLookAtConstraint(target: terrain)
constraint.gimbalLockEnabled = true
self.camera.constraints = [constraint]
sceneView.scene?.rootNode.addChildNode(self.camera)
Other nodes with non-custom geometry show up fine. What's wrong?
Hal Mueller is quite correct that the indices involved must be a specific type, but it should be noted that this functionality has changed significantly in recent versions of the Swift language. Notably, SCNGeometryElement(indices:, primitiveType:) now works perfectly well in Swift 4, and I would advise against using CInt, which did not work for me. Instead, use one of the standard integer types that conform to the FixedWidthInteger protocol, e.g. Int32. If you know there's a maximum number of vertices in your mesh, use the smallest bit size that will encompass all of them.
Example:
let vertices = [
    SCNVector3(x:  5, y:  4, z: 0),
    SCNVector3(x: -5, y:  4, z: 0),
    SCNVector3(x: -5, y: -5, z: 0),
    SCNVector3(x:  5, y: -5, z: 0)
]
let allPrimitives: [Int32] = [0, 1, 2, 0, 2, 3]
let vertexSource = SCNGeometrySource(vertices: vertices)
let element = SCNGeometryElement(indices: allPrimitives, primitiveType: .triangles)
let geometry = SCNGeometry(sources: [vertexSource], elements: [element])
SCNNode(geometry: geometry)
What's Happening Here?
First we create an array of vertices describing points in three-dimensional space. The allPrimitives array describes how those vertices link up. Each element is an index into the vertices array. Since we're using triangles, these should be considered in groups of three, one for each corner. For simplicity's sake I've done a simple flat square here.

We then create a geometry source with the semantic type of vertices, using the original array of all vertices, and a geometry element using the allPrimitives array, also informing it that they are triangles so it knows to group them in threes. These can then be used to create the SCNGeometry object with which we initialise our SCNNode.
An easy way to think about it is that the vertex source exists only to list all the vertices in the object. The geometry element exists only to describe how those vertices are linked up. It's the SCNGeometry that combines these two objects together to create the final physical representation.
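Following the "smallest bit size" advice above, a quad this small could just as well use 16-bit indices; a sketch:

// Sketch: UInt16 also conforms to FixedWidthInteger and halves the index data
let smallPrimitives: [UInt16] = [0, 1, 2, 0, 2, 3]
let smallElement = SCNGeometryElement(indices: smallPrimitives, primitiveType: .triangles)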
Note: see Ash's answer, which is a much better approach for modern Swift than this one.
Your index array has the wrong element size. It's being inferred as [Int]. You need [CInt].
I broke out your elements setup into:
let indices = [0, 2, 3, 0, 1, 2] // inferred as [Int]
print(sizeof(Int))  // 8
print(sizeof(CInt)) // 4
let elements = [
    SCNGeometryElement(indices: indices, primitiveType: .Triangles)
]
To get the indices to be packed like the expected C array, declare the type explicitly:
let indices: [CInt] = [0, 2, 3, 0, 1, 2]
"Custom SceneKit Geometry in Swift on iOS not working but equivalent Objective C code does" goes into more detail, but it's written against Swift 1, so you'll have to do some translation.
SCNGeometryElement(indices:, primitiveType:) doesn't appear to be documented anywhere, although it does appear in the headers.