If I run the following vertex shader in Metal/Swift I get a nice rectangle on the screen:
vertex Vertex vertexShader(uint k [[ vertex_id ]],
                           device float2* position [[buffer(1)]]) {
    Vertex output;
    float2 pos = position[k];
    output.position = float4(pos, 0, 1);
    return output;
}
//position [0.0, 0.0, 0.5, 0.0, 0.0, 0.5, 0.5, 0.5]
//indexList [0, 1, 2, 2, 1, 3]
Now if I run the following I get a blank screen:
vertex Vertex vertexShader(uint k [[ vertex_id ]],
                           device float3* position [[buffer(1)]]) {
    Vertex output;
    float3 pos = position[k];
    output.position = float4(pos, 1);
    return output;
}
//position [0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.0]
//indexList [0, 1, 2, 2, 1, 3]
It seems to me these should produce identical results. What am I missing?
How exactly are you filling the buffer associated with index 1 in your app code?
I suspect you're just supplying a flat array of floats. float3 is not a packed type: its size and alignment are both 16 bytes, the same as float4, so each element carries 4 bytes of padding, and a tightly packed array of 3 floats per vertex will not line up with a device float3*.
The simplest fix is probably to declare position as a pointer to packed_float3, which is exactly 12 bytes with no padding.
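For illustration, a minimal sketch of that change, assuming the same Vertex output struct and the tightly packed three-floats-per-vertex buffer shown above:
vertex Vertex vertexShader(uint k [[ vertex_id ]],
                           device packed_float3* position [[buffer(1)]]) {
    Vertex output;
    // packed_float3 is 12 bytes with no padding, so three consecutive floats
    // per vertex in the buffer map onto each element correctly.
    float3 pos = float3(position[k]);
    output.position = float4(pos, 1);
    return output;
}
Alternatively, keep device float3* in the shader and pad the CPU-side data to four floats per vertex (for example by storing SIMD3<Float> values in Swift, whose stride is 16 bytes).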
I'm just trying to render a red square using Metal, and I'm creating a vertex buffer from an array of Vertex structures that look like this:
struct Vertex {
    var position: SIMD3<Float>
    var color: SIMD4<Float>
}
This is where I'm rendering the square:
var vertices: [Vertex] = [
    Vertex(position: [-0.5, -0.5, 0], color: [1, 0, 0, 1]),
    Vertex(position: [-0.5, 0.5, 0], color: [1, 0, 0, 1]),
    Vertex(position: [0.5, -0.5, 0], color: [1, 0, 0, 1]),
    Vertex(position: [0.5, 0.5, 0], color: [1, 0, 0, 1])
]

var vertexBuffer: MTLBuffer?

func render(using renderCommandEncoder: MTLRenderCommandEncoder) {
    if self.vertexBuffer == nil {
        self.vertexBuffer = self.device.makeBuffer(
            bytes: self.vertices,
            length: MemoryLayout<Vertex>.stride * self.vertices.count,
            options: []
        )
    }

    if let vertexBuffer = self.vertexBuffer {
        renderCommandEncoder.setRenderPipelineState(RenderPipelineStates.defaultState)
        renderCommandEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
        renderCommandEncoder.drawPrimitives(type: .triangleStrip,
                                            vertexStart: 0,
                                            vertexCount: vertexBuffer.length / MemoryLayout<Vertex>.stride)
    }
}
This is what my render pipeline state looks like:
let library = device.makeDefaultLibrary()!
let vertexShader = library.makeFunction(name: "basicVertexShader")
let fragmentShader = library.makeFunction(name: "basicFragmentShader")
let renderPipelineDescriptor = MTLRenderPipelineDescriptor()
renderPipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
renderPipelineDescriptor.vertexFunction = vertexShader
renderPipelineDescriptor.fragmentFunction = fragmentShader
renderPipelineDescriptor.sampleCount = 4
let vertexDescriptor = MTLVertexDescriptor()
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].bufferIndex = 0 // Position
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[1].format = .float4
vertexDescriptor.attributes[1].bufferIndex = 0 // Color
vertexDescriptor.attributes[1].offset = MemoryLayout<SIMD3<Float>>.stride
vertexDescriptor.layouts[0].stride = MemoryLayout<Vertex>.stride
renderPipelineDescriptor.vertexDescriptor = vertexDescriptor
self.defaultState = try! device.makeRenderPipelineState(descriptor: renderPipelineDescriptor)
The vertex and fragment shaders just pass through the position and color. For some reason, when this is rendered, the first float of the color of the first vertex comes into the vertex shader as an extremely small value, effectively showing black. It only happens for the red value of the first vertex in the array.
Red square with one black vertex
I can see from debugging the GPU frame that the first vertex has a red color component of 5E-41 (essentially 0).
I have no idea why this is the case; it seems to happen at the point where the vertices are copied into the vertex buffer. I'm guessing it has something to do with my render pipeline's vertex descriptor, but I haven't been able to figure out what's wrong. Thanks for any help!
This is, with high likelihood, a duplicate of this question. I'd encourage you to consider the workarounds there, and also to file your own feedback to raise visibility of this bug. - warrenm
Correct, this appears to be a driver bug of some sort. I fixed it by adding the .cpuCacheModeWriteCombined option to makeBuffer and have filed feedback.
self.vertexBuffer = self.device.makeBuffer(
    bytes: self.vertices,
    length: MemoryLayout<Vertex>.stride * self.vertices.count,
    options: [.cpuCacheModeWriteCombined]
)
So I need the camera to orbit my data multiple times. I thought this would be quite easy, but I could not figure it out. Double-clicking the camera sequence in the Animation View let me add another path, but it added a default path that was different from the orbit, and manually (and painfully) copying the path parameters over did not work either. Any ideas on how to do this?
I know this is quite some time later, but I was stuck on this same need. I ended up tracing how a follow-path camera cue works in Python, then stacked a few of those lines in a for loop, making sure I updated the KeyTime of each frame. This gave me an animation that orbits the focal point for as many laps as there are loop iterations.
https://discourse.paraview.org/t/issues-with-multiple-orbit-laps-in-single-animation/11371
from paraview.simple import *

anim = GetAnimationScene()
renderView1 = GetActiveViewOrCreate("RenderView")

cameraAnimationCue1 = CameraAnimationCue()
#cameraAnimationCue1 = GetCameraTrack(view=rv)
cameraAnimationCue1.Mode = 'Path-based'
cameraAnimationCue1.AnimatedProxy = renderView1

# create a new key frame
n = 3
for i in range(n):
    keyFrameN = CameraKeyFrame()
    keyFrameN.Position = [-6.6921304299024635, 0.0, 0.0]
    keyFrameN.FocalPoint = [1e-20, 0.0, 0.0]
    keyFrameN.ViewUp = [0.0, 0.0, 1.0]
    keyFrameN.ParallelScale = 1.7320508075688772
    keyFrameN.PositionPathPoints = [0.0, -5.0, 0.0, 2.938926261462365, -4.045084971874736, 0.0, 4.755282581475766, -1.545084971874737, 0.0, 4.755282581475766, 1.5450849718747361, 0.0, 2.938926261462365, 4.045084971874735, 0.0, 1.3322676295501878e-15, 4.9999999999999964, 0.0, -2.9389262614623624, 4.045084971874735, 0.0, -4.755282581475763, 1.5450849718747368, 0.0, -4.755282581475763, -1.5450849718747341, 0.0, -2.9389262614623632, -4.045084971874731, 0.0]
    keyFrameN.FocalPathPoints = [0.0, 0.0, 0.0]
    keyFrameN.ClosedPositionPath = 1
    keyFrameN.KeyTime = i / n
    cameraAnimationCue1.KeyFrames.append(keyFrameN)

# ending scale
keyFrame9333 = CameraKeyFrame()
keyFrame9333.KeyTime = 1.0
keyFrame9333.Position = [-6.6921304299024635, 0.0, 0.0]
keyFrame9333.FocalPoint = [1e-20, 0.0, 0.0]
keyFrame9333.ViewUp = [0.0, 0.0, 1.0]
keyFrame9333.ParallelScale = 1.7320508075688772

# initialize the animation track
cameraAnimationCue1.KeyFrames.append(keyFrame9333)
anim.Cues.append(cameraAnimationCue1)
I'm trying to pass a GLKVector4 to a shader that should receive it as a vec4. I'm using a fragment shader modifier:
material.shaderModifiers = [ SCNShaderModifierEntryPoint.fragment: shaderModifier ]
where shaderModifier is:
// color changes
uniform float colorModifier;
uniform vec4 colorOffset;
vec4 color = _output.color;
color = color + colorOffset;
color = color + vec4(0.0, colorModifier, 0.0, 0.0);
_output.color = color;
(I'm simply adding a color offset.) I've tried:
material.setValue(GLKVector4(v: (250.0, 0.0, 0.0, 0.0)), forKey: "colorOffset")
which doesn't work (no offset is added and the shader falls back to its default value of (0, 0, 0, 0)). The same happens if I replace GLKVector4 with SCNVector4.
Following this I've also tried:
let points: [float2] = [float2(250.0), float2(0.0), float2(0.0), float2(0.0)]
material.setValue(NSData(bytes: points, length: points.count * sizeof(float2)), forKey: "colorOffset")
However, I can pass a float value to the uniform colorModifier easily by doing:
material.setValue(250.0, forKey: "colorModifier")
and that will increase the green channel as expected.
You have to use NSValue, which has a convenience initializer for SCNVector4:
let v = SCNVector4(x: 250.0, y: 0.0, z: 0.0, w: 0.0)
material.setValue(NSValue(scnVector4: v), forKey: "colorOffset")
It'd be too good if SceneKit could handle its own types directly...
So I'm making a vertex shader to make a GameObject look like it's shrinking/expanding (pulsating?) continuously.
I am using a normal scale matrix to multiply the position of every vertex, but I want to keep the object appearing centered in the same position. If I could get the transform.position of the gameObject that is being rendered, I know I would be able to keep the center position the same.
So how would I access the GameObject's position in my Cg shader?
Or am I approaching this problem incorrectly?
vertexOut vert(vertexIn v)
{
    vertexOut o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.pos2 = mul(UNITY_MATRIX_MVP, v.vertex);

    float scaleVal = sin(_Time.y * 10) / 8 + 1.0;
    float4x4 scaleMat = float4x4(
        scaleVal, 0, 0, 0,
        0, scaleVal, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1);

    o.pos = mul(scaleMat, o.pos);
    return o;
}
Simply define a shader property of type Vector. Then you can update this property on every frame by calling SetVector on the material.
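As a rough sketch of that approach (the property name _ObjectCenter and the script below are illustrative, not from the original post):
// Shader side (ShaderLab + Cg): declare the property and a matching uniform.
//   Properties { _ObjectCenter ("Object Center", Vector) = (0, 0, 0, 0) }
//   float4 _ObjectCenter; // world-space center, updated from C# every frame

// C# side: push transform.position into the material each frame.
using UnityEngine;

public class PulsateCenter : MonoBehaviour
{
    private Material material;

    void Start()
    {
        material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        // The name must match the property declared in the shader.
        material.SetVector("_ObjectCenter", transform.position);
    }
}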
Sounds like you just want to multiply v.vertex by your scaleMat. So something like:
vertexOut vert(vertexIn v)
{
    vertexOut o;
    o.pos2 = mul(UNITY_MATRIX_MVP, v.vertex);

    float scaleVal = sin(_Time.y * 10) / 8 + 1.0;
    float4x4 scaleMat = float4x4(
        scaleVal, 0, 0, 0,
        0, scaleVal, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1);

    o.pos = mul(UNITY_MATRIX_MVP, mul(scaleMat, v.vertex));
    return o;
}
Of course this might behave differently from what you want depending on how you want your mesh to behave under rotation.
To answer the actual posted question though, you can figure out what the transform's position is directly in the vertex shader by converting the origin to world space:
mul(unity_ObjectToWorld, float4(0,0,0,1)).xyz
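For completeness, a rough sketch of how that world-space origin could be used to scale the mesh about the object's center in world space. This is an illustration built on the question's vert function, not part of the original answer; it assumes UNITY_MATRIX_VP is available and the object is not statically batched (batching changes what unity_ObjectToWorld refers to):
vertexOut vert(vertexIn v)
{
    vertexOut o;
    o.pos2 = mul(UNITY_MATRIX_MVP, v.vertex);

    float scaleVal = sin(_Time.y * 10) / 8 + 1.0;

    // The transform's position: the object-space origin converted to world space.
    float3 center = mul(unity_ObjectToWorld, float4(0, 0, 0, 1)).xyz;

    // Scale the world-space vertex position about that center, then project.
    float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    worldPos = center + (worldPos - center) * scaleVal;
    o.pos = mul(UNITY_MATRIX_VP, float4(worldPos, 1.0));

    return o;
}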