How to draw concave shape using Stencil test on Metal - swift

This is my first time trying to use the stencil test. I have seen some examples using OpenGL, and a few on Metal, but those focus on the depth test instead. I understand the theory behind the stencil test, but I don't know how to set it up on Metal.
I want to draw irregular shapes. For the sake of simplicity, let's consider the following 2D polygon:
I want the stencil to pass where the number of overlapping triangles is odd, so that I can achieve something like this, where the white area is the area to be ignored:
I'm doing the following steps in the exact order:
Setting the depthStencilPixelFormat:
mtkView.depthStencilPixelFormat = .stencil8
mtkView.clearStencil = .allZeros
Stencil attachment:
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .stencil8,
                                                                 width: drawable.texture.width,
                                                                 height: drawable.texture.height,
                                                                 mipmapped: true)
textureDescriptor.textureType = .type2D
textureDescriptor.storageMode = .private
textureDescriptor.usage = [.renderTarget, .shaderRead, .shaderWrite]
mainPassStencilTexture = device.makeTexture(descriptor: textureDescriptor)
let stencilAttachment = MTLRenderPassStencilAttachmentDescriptor()
stencilAttachment.texture = mainPassStencilTexture
stencilAttachment.clearStencil = 0
stencilAttachment.loadAction = .clear
stencilAttachment.storeAction = .store
renderPassDescriptor.stencilAttachment = stencilAttachment
Stencil descriptor:
stencilDescriptor.depthCompareFunction = MTLCompareFunction.always
stencilDescriptor.isDepthWriteEnabled = true
stencilDescriptor.frontFaceStencil.stencilCompareFunction = MTLCompareFunction.equal
stencilDescriptor.frontFaceStencil.stencilFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthStencilPassOperation = MTLStencilOperation.invert
stencilDescriptor.frontFaceStencil.readMask = 0x1
stencilDescriptor.frontFaceStencil.writeMask = 0x1
stencilDescriptor.backFaceStencil = nil
depthStencilState = device.makeDepthStencilState(descriptor: stencilDescriptor)
And lastly, I'm setting the reference value and the stencil state in the main pass:
renderEncoder.setStencilReferenceValue(0x1)
renderEncoder.setDepthStencilState(self.depthStencilState)
Am I missing something? The result I get looks just as if there were no stencil at all. I can see some differences when changing the settings of the depth test, but nothing happens when I change the settings of the stencil...
Any clue?
Thank you in advance

You're clearing the stencil texture to 0. The reference value is 1. The comparison function is "equal". So, the comparison will fail (1 does not equal 0). The operation for when the stencil comparison fails is "keep", so the stencil texture remains 0. Nothing changes for subsequent fragments.
I would expect that you'd get no rendering, although depending on the order of your vertices and the front-face winding mode, you may be looking at the back faces of your triangles, in which case the stencil test is effectively disabled. If you don't otherwise care about front vs. back, just set both stencil descriptors the same way.
I think you need to do two passes: first, a stencil-only render; second, the color render governed by the stencil buffer. For the stencil only, you would make the compare function .always. This will toggle (invert) the low bit for each triangle that's drawn over a given pixel, giving you an indication of even or odd count. Because neither the compare function nor the operation involve the reference value, it doesn't matter what it is.
For the second pass, you'd set the compare function to .equal and the reference value to 1. The operations should all be .keep. Also, make sure to set the stencil attachment load action to .load (not .clear).
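A minimal sketch of the two depth/stencil states, assuming the rest of the setup stays as in the question (stencil8 attachment, reference value set on the encoder):
// Pass 1: stencil-only fill. Every fragment inverts the low bit, so pixels
// covered by an odd number of triangles end up with stencil == 1.
let fillStencil = MTLStencilDescriptor()
fillStencil.stencilCompareFunction = .always
fillStencil.depthStencilPassOperation = .invert
fillStencil.readMask = 0x1
fillStencil.writeMask = 0x1
let fillDescriptor = MTLDepthStencilDescriptor()
fillDescriptor.frontFaceStencil = fillStencil
fillDescriptor.backFaceStencil = fillStencil   // winding order no longer matters
let fillState = device.makeDepthStencilState(descriptor: fillDescriptor)

// Pass 2: color render gated by the stencil. Only pixels whose stencil value
// equals the reference survive, and the stencil itself is left untouched.
let testStencil = MTLStencilDescriptor()
testStencil.stencilCompareFunction = .equal
testStencil.stencilFailureOperation = .keep
testStencil.depthFailureOperation = .keep
testStencil.depthStencilPassOperation = .keep
testStencil.readMask = 0x1
testStencil.writeMask = 0x1
let testDescriptor = MTLDepthStencilDescriptor()
testDescriptor.frontFaceStencil = testStencil
testDescriptor.backFaceStencil = testStencil
let testState = device.makeDepthStencilState(descriptor: testDescriptor)

// In the second pass: renderEncoder.setStencilReferenceValue(1), and the
// stencil attachment's loadAction must be .load rather than .clear.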

Related

How to get depth images from the camera in pyBullet

In pyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing: img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically: to get the images that are seen in Synthetic Camera RGB data and Synthetic Camera Depth Data (especially this one), which are the camera windows you can see in the following picture on the left.
import pybullet as p
from PIL import Image

p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch,
                             cameraTargetPosition=[center_x, center_y, 0.785])
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]    # RGBA pixel data
depthBuffer = img[3]  # non-linear depth buffer values in [0, 1]
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)
rgbim.save('test_img/rgbtest' + str(counter) + '.jpg')
depim.save('test_img/depth' + str(counter) + '.tiff')
counter += 1
I already ran the following, so I don't know if it is related to the settings: p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods because the depth part is complicated. I don't understand whether it needs to be treated separately because of the pixel color information, or whether I need to work with the projection and view matrices.
I need to save it as a .tiff because I get "cannot save F to png" errors. I tried playing a bit with the bit depth but accomplished nothing. In case you ask, this is what I tried:
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
The following is an example of the .tiff image:
And to end, just to remark that these depth images keep changing: looking through all of them, then at the RGB images, and then back at the depth images shows different results, even though they should be the same image. I have never seen anything like this before.
I thought "I managed to fix this some time ago, might as well post the answer I found".
The data structure of img has to be taken into account!
import numpy as np
import pybullet as p
from PIL import Image
import matplotlib.pyplot as plt

# IMG_SIZE, obj_name, and counter come from your own setup; near and far must
# match the near/far planes of the projection matrix you render with.
img = p.getCameraImage(224, 224, shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4))
depth_buffer_opengl = np.reshape(img[3], [IMG_SIZE, IMG_SIZE])
# Linearize the non-linear depth buffer back to metric depth.
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)
seg_opengl = np.reshape(img[4], [IMG_SIZE, IMG_SIZE]) * 1. / 255.
rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')
rgbim_no_alpha.save('dataset/' + obj_name + '/' + obj_name + '_rgb_' + str(counter) + '.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/' + obj_name + '/' + obj_name + '_depth_' + str(counter) + '.jpg', depth_buffer_opengl)
# plt.show()
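As a side note on the "cannot save F to png" error: it comes from trying to save a float ("F" mode) image directly. A minimal sketch of one workaround, assuming a hypothetical max_depth cutoff, is to scale the linearized depth into the uint16 range first; Pillow will happily save a uint16 array as a 16-bit image:
import numpy as np
from PIL import Image

def save_depth_uint16(depth_m, path, max_depth=10.0):
    # Scale metric depth into [0, 1], then into the 16-bit integer range.
    d = np.clip(depth_m / max_depth, 0.0, 1.0)
    Image.fromarray((d * 65535).astype(np.uint16)).save(path)

save_depth_uint16(depth_opengl, 'dataset/depth_16bit_' + str(counter) + '.png')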
Final Images:

my shader is ignoring my worldspace height

I'm very new to shaders so bear with me. I have a mesh on which I want to put a sand texture below a world-space y position of, say, 10, and a grass texture above it. Apparently it is ignoring whatever I put in and only selecting the grass texture. Something IS happening, because my vert and tris count explodes with this function compared to just returning the same texture, but I see no visual change no matter what my _SandStart value is.
this is in my frag function:
if (input.positionWS.y < _SandStart) {
return tex2D(_MainTex, input.uv)* mainLight.shadowAttenuation;
} else {
return tex2D(_SandTex, input.uv) * mainLight.shadowAttenuation;
}
Is there also a way I can easily debug some of the values?
Please note that the OP figured out that their specific problem wasn't caused by the code in the question, but by an error in their geometry function. This answer is only about the question "Is there a way to debug shader values", as this debugging method helped the OP find the problem.
Debugging shader code can be quite a challenging task, depending on what it is you need to debug, and there are multiple approaches to it. Personally, the approach I like best is using colours.
If we break it down, there are three aspects of your code that could be faulty:
the value of input.positionWS.y
the if statement (input.positionWS.y < _SandStart)
Returning your texture: return tex2D(_MainTex, input.uv) * mainLight.shadowAttenuation;
Let's walk down the list and test each individually.
First, check whether input.positionWS.y actually contains a value we expect it to contain. To do this we can set any of the RGB channels to its value, and just straight up return that:
return float4(input.positionWS.y, 0, 0, 1);
Now, if input.positionWS.y isn't a normalized value (i.e. a value that ranges from 0 to 1), this is almost guaranteed to just return your texture as entirely red. To normalize it, we divide the value by its maximum; let's take max = 100 for the example.
return float4(input.positionWS.y / 100, 0, 0, 1);
This should now make the texture fully red at the top (where input.positionWS.y / 100 is 1), black at the bottom (where input.positionWS.y / 100 is 0), and a black-to-red gradient in between. (Note that since it's a position in world space, you may need to move the object up or down to see the colour shift.) If this doesn't happen, for example it always stays black or fully red, then your issue is most likely the input.positionWS.y.
Next, the if statement. It could be that (input.positionWS.y < _SandStart) always returns either true or false, meaning it never splits. We can test this quite easily by commenting out the current texture returns and instead returning a flat colour for each branch, like so:
if(input.positionWS.y < _SandStart)
{
return float4(1,0,0,1);
}
else
{
return float4(0,0,1,1);
}
If we tested input.positionWS.y to be correct in step 1, and _SandStart is set correctly, we should see the texture divided into a red part (where the condition is true) and a blue part (where it is false). (Again, since we're basing this on world position, we might need to change the object's height a bit to see it.) If this division in colours doesn't happen, then the likely cause is that _SandStart isn't set properly, or is set to an incorrect value. (Assuming it is a shader property, you can inspect its value in the material editor.)
If both of the above steps yield the expected result, then return tex2D(_MainTex, input.uv) * mainLight.shadowAttenuation; is possibly the culprit. To debug this, we can return one of the textures without the if statement and without shadowAttenuation, see if it gets applied, and then return the other texture by changing which line is commented out:
return tex2D(_MainTex, input.uv);
//return tex2D(_SandTex, input.uv);
If each of these textures gets applied properly separately, then it is unlikely that this was your cause, leaving either the shadowAttenuation (just add the multiplication to the test above) or something different altogether that isn't covered by the code in your question. As a compact extra check, you can also visualize the comparison itself without branching; see the small sketch below.
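A small sketch using the standard step and lerp intrinsics: step(a, b) returns 1 when b >= a, so pixels at or above _SandStart render green and pixels below render red, combining steps 1 and 2 in a single image.
// Visualize the comparison itself: red below _SandStart, green at or above it.
float isAbove = step(_SandStart, input.positionWS.y);
return lerp(float4(1, 0, 0, 1), float4(0, 1, 0, 1), isAbove);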
Bonus round: if you have a shader property you want to debug, you can actually do this from C# as well, using the Material.Get<type> functions (the supported types can be found in the docs, and include the array variants too, as well as both Get and Set). A small example:
Properties
{
_Foo ("Foo", Float) = 2
_Bar ("Bar", Color) = (1,1,1,1)
}
can be debugged from C# using
Material mat = GetComponent<Renderer>().material;
Debug.LogFormat("_Foo value: {0}", mat.GetFloat("_Foo")); // prints 2
Debug.LogFormat("_Bar value: {0}", mat.GetColor("_Bar")); // prints RGBA(1.000, 1.000, 1.000, 1.000)

Copying data between metal textures of different shapes

I am converting two trained Keras models to Metal Performance Shaders. I have to reshape the output of the first graph and use it as input to the second graph. The first graph's output is an MPSImage with "shape" (1,1,8192), and the second graph's input is an MPSImage of "shape" (4,4,512).
I cast graph1's output image.texture as a float16 array, and pass it to the following function to copy the data into "midImage", a 4x4x512 MPSImage:
func reshapeTexture(imageArray: [Float16]) -> MPSImage {
    let image = imageArray
    image.withUnsafeBufferPointer { ptr in
        let width = midImage.texture.width
        let height = midImage.texture.height
        for slice in 0..<128 {
            for w in 0..<width {
                for h in 0..<height {
                    let region = MTLRegion(origin: MTLOriginMake(w, h, 0),
                                           size: MTLSizeMake(1, 1, 1))
                    midImage.texture.replace(region: region,
                                             mipmapLevel: 0,
                                             slice: slice,
                                             withBytes: ptr.baseAddress!.advanced(by: (slice * 4 * width * height) + ((w + h) * 4)),
                                             bytesPerRow: MemoryLayout<Float16>.stride * 4,
                                             bytesPerImage: 0)
                }
            }
        }
    }
    return midImage
}
When I pass midImage to graph2, the output of the graph is a square with 3/4 garbled noise, 1/4 black in the bottom right corner. I think I am not understanding something about the MPSImage slice property for storing extra channels. Thanks!
Metal 2D texture arrays are nearly always stored in a Morton or "Z" ordering of some kind. Certainly MPS always allocates them that way, though I suppose on macOS there may be a means to make a linear 2D texture array and wrap an MPSImage around it. So, without due care, direct access of a 2D texture array's backing store is going to result in sadness and confusion.
The right way to do this is to write a simple Metal copy kernel. This gives you storage order independence and you don’t have to wait for the command buffer to complete before you can do the operation.
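For illustration, a minimal sketch of such a copy kernel (MSL), assuming the source is the (1,1,8192) image (2048 RGBA slices), the destination is the (4,4,512) image (128 RGBA slices), and the reshape should follow the row-major (H, W, C) order a Keras reshape uses; the exact index math depends on the ordering your model assumes:
#include <metal_stdlib>
using namespace metal;

// One thread per destination texel per slice: dispatch a (4, 4, 128) grid.
kernel void reshape_1x1x8192_to_4x4x512(
    texture2d_array<half, access::read>  src [[ texture(0) ]],
    texture2d_array<half, access::write> dst [[ texture(1) ]],
    ushort3 gid [[ thread_position_in_grid ]])
{
    if (gid.x >= dst.get_width() || gid.y >= dst.get_height() ||
        gid.z >= dst.get_array_size()) { return; }

    half4 v;
    for (ushort i = 0; i < 4; ++i) {
        // Destination feature channel, then its flat index in (H, W, C) order.
        uint channel = uint(gid.z) * 4 + i;
        uint flat = (uint(gid.y) * dst.get_width() + gid.x) * 512 + channel;
        // In the (1,1,8192) source, channel k lives at slice k/4, component k%4.
        half4 s = src.read(uint2(0, 0), flat / 4);
        v[i] = s[flat % 4];
    }
    dst.write(v, uint2(gid.x, gid.y), gid.z);
}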
A feature request in Radar might also be warranted. Please also look in the latest macOS / iOS seed to see if Apple recently added a reshape filter for you.
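On that last point: if you can target newer OS versions, a graph-level reshape node did ship later. A sketch, assuming your first graph's output is available as an MPSNNImageNode named graph1Output (hypothetical name); MPSNNReshapeNode is available on iOS 12 / macOS 10.14 and later:
let reshaped = MPSNNReshapeNode(source: graph1Output,
                                resultWidth: 4,
                                resultHeight: 4,
                                resultFeatureChannels: 512)
// Feed reshaped.resultImage into the first node of the second graph.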

SKPhysicsBodies appear to be slightly out of place

Edit: I have been able to solve this problem by using PhysicsEditor to make a polygonal physics body instead of using SKPhysicsBody(... alphaThreshold: ...).
--
For some reason I'm having trouble with what I'm assuming is SKPhysicsBodies being slightly out of place. While using showsPhysics, my stationary obstacle nodes appear to have their physics bodies in the correct position; however, I am able to trigger collisions without actually touching the obstacle. If you look at the image below, it shows where I have found the physics bodies to be off-centre, despite showsPhysics telling me otherwise. (Note: the player node travels in the middle of these obstacle nodes.)
I also thought it would be worth noting that while the player is travelling, its physics body appears to travel slightly ahead, but I figured this is probably normal.
I also use SKPhysicsBody(... alphaThreshold: ...) to create the physics bodies from .png images.
Cheers.
Edit: Here's how I create the obstacle physics bodies. Once they're added to the worldNode, they are left alone until they need to be removed. Apart from that, I don't change them in any way.
let obstacleNode = SKSpriteNode(imageNamed: ... )
obstacleNode.position = CGPoint(x: ..., y: ...)
obstacleNode.name = "obstacle"
obstacleNode.physicsBody = SKPhysicsBody(texture: obstacleNode.texture!,
                                         alphaThreshold: 0.1,
                                         size: CGSize(width: obstacleNode.texture!.size().width,
                                                      height: obstacleNode.texture!.size().height))
obstacleNode.physicsBody?.affectedByGravity = false
obstacleNode.physicsBody?.isDynamic = false
obstacleNode.physicsBody!.categoryBitMask = CC.wall.rawValue
obstacleNode.physicsBody!.collisionBitMask = CC.player.rawValue
obstacleNode.physicsBody!.contactTestBitMask = CC.player.rawValue
worldNode.addChild(obstacleNode)
The player node is treated the same way, here is how the player moves.
playerNode.physicsBody?.velocity = CGVector(dx: dx, dy: dy)
I'm assuming you aren't showing the exact images that you used to create your SKSpriteNode and SKPhysicsBody instances. Since you are using a texture to define the shape of your SKPhysicsBody you are likely running up against this:
SKPhysicsBody documentation
If you do not want to create your own shapes, you can use SpriteKit to create a shape for you based on the sprite’s texture.
This is easy and convenient, but it can sometimes give unexpected results depending on the textures you are using for your sprite. Perhaps try making an explicit mask or using a simple shape to represent your physics body, as sketched below. There are very good examples and guidelines in that documentation.
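A minimal sketch of swapping the texture-derived body for an explicit polygon (the corner points here are hypothetical; trace your own sprite's outline, and keep the polygon convex):
let bodyPath = CGMutablePath()
bodyPath.addLines(between: [CGPoint(x: -32, y: -32),
                            CGPoint(x:  32, y: -32),
                            CGPoint(x:  32, y:  32),
                            CGPoint(x: -32, y:  32)])
bodyPath.closeSubpath()
// Points are in the node's coordinate space, relative to its origin.
obstacleNode.physicsBody = SKPhysicsBody(polygonFrom: bodyPath)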
I would also follow this pattern when you set the properties on your objects:
// safely unwrap and handle failure if it fails
guard let texture = obstacleNode.texture else { return }
// create the physics body
let physicsBody = SKPhysicsBody(texture: texture,
alphaThreshold: 0.1,
size: CGSize(width: texture.size().width,
height: texture.size().height))
// safely set its properties without the need to unwrap an Optional
physicsBody.affectedByGravity = false
// set the rest of the properties
// set the physics body property on the node
obstacleNode.physicsBody = physicsBody
By setting the properties on a concrete instance of SKPhysicsBody and fully unwrapping and testing Optionals you minimize the chances for a run-time crash that may be difficult to debug.

how to remove a box2d Fixture

I have a square barrier whose edges are defined at run time, based on the position and rotation the user gives the barrier.
b2BodyDef barrierBodyDef;
barrierBodyDef.type = b2_staticBody;
barrierBodyDef.position.Set(curBarrier.position.x/PTM_RATIO, curBarrier.position.y/PTM_RATIO);
barrierBodyDef.userData = curBarrier;
b2Body *barrierBody;
barrierBody = _world->CreateBody(&barrierBodyDef);
b2EdgeShape barrierEdge;
b2FixtureDef barrierShapeDef;
barrierShapeDef.shape = &barrierEdge;
barrierShapeDef.friction = 1.0f;
barrierEdge.Set(b2Vec2((x1)/PTM_RATIO, (y1)/PTM_RATIO),
b2Vec2((x2)/PTM_RATIO, (y2)/PTM_RATIO));
barrierBody->CreateFixture(&barrierShapeDef);
barrierEdge.Set(b2Vec2((x2)/PTM_RATIO, (y2)/PTM_RATIO),
b2Vec2((x3)/PTM_RATIO, (y3)/PTM_RATIO));
barrierBody->CreateFixture(&barrierShapeDef);
barrierEdge.Set(b2Vec2((x3)/PTM_RATIO, (y3)/PTM_RATIO),
b2Vec2((x4)/PTM_RATIO, (y4)/PTM_RATIO));
barrierBody->CreateFixture(&barrierShapeDef);
barrierEdge.Set(b2Vec2((x4)/PTM_RATIO, (y4)/PTM_RATIO),
b2Vec2((x1)/PTM_RATIO, (y1)/PTM_RATIO));
barrierBody->CreateFixture(&barrierShapeDef);
I now want to delete these edges, so that the user can re-position the barrier.
How do I go about removing the edges between points (x1, y1) → (x4, y4), so that they no longer take part in collisions?
I am a bit new to Box2D.
Keep a reference to the fixture when creating it (a local variable here for example; you should use an ivar):
b2Fixture* barrierFixture = barrierBody->CreateFixture(&barrierShapeDef);
And later destroy the fixture:
barrierBody->DestroyFixture(barrierFixture);
barrierFixture = NULL; // it's a C++ pointer, so NULL rather than nil
You can also use the body's GetFixtureList() to iterate over fixtures.
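For a barrier like yours with four edge fixtures, a minimal sketch of that iteration (assuming barrierBody is still accessible); note that DestroyFixture unlinks the fixture from the body's list, so grab the next pointer first:
for (b2Fixture* f = barrierBody->GetFixtureList(); f != NULL; )
{
    b2Fixture* next = f->GetNext();
    barrierBody->DestroyFixture(f);
    f = next;
}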
What you cannot do is add or remove shapes from a fixture, or modify the shape's vertices. To remove a point from a body's shape, you'll have to destroy the fixture and replace it with a new one.
It is not necessary to recreate the entire body, in fact that can be problematic since you'll probably want to preserve the body's current state (not just position but also linear and angular velocities and perhaps other things too).