How to create a 1D Metal Texture? - swift

I want an array of 1-dimensional data (essentially an array of arrays) that is created at run time, so the size of this array is not known at compile time. I want to easily send that array of data to a kernel shader using setTextures, so my kernel can just accept a single argument like texture1d_array, which will bind each element texture to a different kernel index automatically, regardless of how many there are.
The question is how to actually create a 1D MTLTexture? All the MTLTextureDescriptor options seem to focus on 2D or 3D. Is it as simple as creating a 2D texture with a height of 1? Would that then be a 1D texture that the kernel would accept?
I.e.
let textureDescriptor = MTLTextureDescriptor
    .texture2DDescriptor(pixelFormat: .r16Uint,
                         width: length,
                         height: 1,
                         mipmapped: false)
If my data is actually just one-dimensional (not actually pixel data), is there an equally convenient way to use an ordinary buffer instead of MTLTexture, with the same flexibility of sending an arbitrarily-sized array of these buffers to the kernel as a single kernel argument?

You can recreate all of those factory methods yourself. I would just use an initializer though unless you run into collisions.
public extension MTLTextureDescriptor {
    /// A 1D texture descriptor.
    convenience init(
        pixelFormat: MTLPixelFormat = .r16Uint,
        width: Int
    ) {
        self.init()
        textureType = .type1D
        self.pixelFormat = pixelFormat
        self.width = width
    }
}
MTLTextureDescriptor(width: 512)
This is no different than texture2DDescriptor with a height of 1, but it's not as much of a lie. (I.e. yes, a texture can be thought of as infinite-dimensional, with a magnitude of 1 in everything but the few that matter. But we throw out all the dimensions with 1s when saying "how dimensional" something is.)
let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .r16Uint,
    width: 512,
    height: 1,
    mipmapped: false
)
descriptor.textureType = .type1D
descriptor == MTLTextureDescriptor(width: 512) // true
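As a minimal sketch (assuming you have a device and a [UInt16] array called values), creating and filling the 1D texture could look like this:
let oneDDescriptor = MTLTextureDescriptor(width: values.count)
oneDDescriptor.usage = .shaderRead

guard let oneDTexture = device.makeTexture(descriptor: oneDDescriptor) else {
    fatalError("could not create the 1D texture")
}

values.withUnsafeBytes { bytes in
    // For a 1D texture the region is width x 1 x 1 and there is only one row.
    oneDTexture.replace(region: MTLRegionMake1D(0, values.count),
                        mipmapLevel: 0,
                        withBytes: bytes.baseAddress!,
                        bytesPerRow: values.count * MemoryLayout<UInt16>.stride)
}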
——
textureType = .typeTextureBuffer takes care of your second question, but why not use textureBufferDescriptor then?
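For the texture-buffer route, a rough sketch (again assuming a device and a [UInt16] array called values; requires macOS 10.14 / iOS 12 or later):
let length = values.count
// Note: bytesPerRow may need rounding up to device.minimumLinearTextureAlignment(for: .r16Uint).
let bytesPerRow = length * MemoryLayout<UInt16>.stride

let buffer = device.makeBuffer(bytes: values,
                               length: bytesPerRow,
                               options: .storageModeShared)!

let bufferDescriptor = MTLTextureDescriptor.textureBufferDescriptor(with: .r16Uint,
                                                                    width: length,
                                                                    resourceOptions: .storageModeShared,
                                                                    usage: .shaderRead)

// The resulting texture is a view onto the buffer's memory; no copy is made.
let textureView = buffer.makeTexture(descriptor: bufferDescriptor,
                                     offset: 0,
                                     bytesPerRow: bytesPerRow)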


CGContext.init returns nil

I'm making a simple graphics app with moving objects and some animation for Mac OS.
What I'm trying to do is to create a bitmap in memory which is to be rendered to the actual context at the end of a frame (not sure whether CG is a proper way to do this):
let tempBitmap = CGContext.init(data: nil,
                                width: width,
                                height: height,
                                bitsPerComponent: 8,
                                bytesPerRow: 0,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.none.rawValue)
but it always returns nil.
What is the proper way to create an in-memory bitmap?
Your combination is not supported. You can't have RGB color with no alpha. The list of supported pixel formats is in the Quartz 2D Programming Guide.
You can ignore the alpha ("none skip first" or "none skip last") but you can't work directly on 24-bit (3x8) packed pixels.
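For example, a minimal sketch of one supported combination, 8 bits per component and 32 bits per pixel with the alpha byte skipped (width and height assumed to be defined elsewhere):
// "None skip first": an alpha byte is stored but ignored, which device RGB supports.
let bitmap = CGContext(data: nil,
                       width: width,
                       height: height,
                       bitsPerComponent: 8,
                       bytesPerRow: 0,   // 0 lets CoreGraphics choose the row stride
                       space: CGColorSpaceCreateDeviceRGB(),
                       bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)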

Copying data between metal textures of different shapes

I am converting two trained Keras models to Metal Performance Shaders. I have to reshape the output of the first graph and use it as input to the second graph. The first graph's output is an MPSImage with "shape" (1,1,8192), and the second graph's input is an MPSImage of "shape" (4,4,512).
I cast graph1's output image.texture as a float16 array, and pass it to the following function to copy the data into "midImage", a 4x4x512 MPSImage:
func reshapeTexture(imageArray: [Float16]) -> MPSImage {
    let image = imageArray
    image.withUnsafeBufferPointer { ptr in
        let width = midImage.texture.width
        let height = midImage.texture.height
        for slice in 0..<128 {
            for w in 0..<width {
                for h in 0..<height {
                    let region = MTLRegion(origin: MTLOriginMake(w, h, 0),
                                           size: MTLSizeMake(1, 1, 1))
                    midImage.texture.replace(region: region,
                                             mipmapLevel: 0,
                                             slice: slice,
                                             withBytes: ptr.baseAddress!.advanced(by: (slice * 4 * width * height) + ((w + h) * 4)),
                                             bytesPerRow: MemoryLayout<Float16>.stride * 4,
                                             bytesPerImage: 0)
                }
            }
        }
    }
    return midImage
}
When I pass midImage to graph2, the output of the graph is a square with 3/4 garbled noise, 1/4 black in the bottom right corner. I think I am not understanding something about the MPSImage slice property for storing extra channels. Thanks!
Metal 2D texture arrays are nearly always stored in a Morton or "Z" ordering of some kind. Certainly MPS always allocates them that way, though I suppose on macOS there may be a means to make a linear 2D texture array and wrap an MPSImage around it. So, without undue care, direct access of a 2D texture array backing store is going to result in sadness and confusion.
The right way to do this is to write a simple Metal copy kernel. This gives you storage order independence and you don’t have to wait for the command buffer to complete before you can do the operation.
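As a sketch of what that could look like, assuming a channels-last reshape from (1,1,8192) to (4,4,512); the kernel name is made up and the index math is illustrative, so adjust it to match whatever ordering your Keras reshape expects:
import Metal
import MetalPerformanceShaders

// Hypothetical reshape-copy kernel, compiled from source at runtime. Both MPSImages are
// backed by texture2d_array<half>: the source is 1x1 with 2048 slices (8192 channels / 4),
// the destination 4x4 with 128 slices (512 channels / 4). The destination's texture
// needs .shaderWrite usage.
let reshapeKernelSource = """
#include <metal_stdlib>
using namespace metal;

kernel void reshape_1x1x8192_to_4x4x512(
    texture2d_array<half, access::read>  src [[texture(0)]],
    texture2d_array<half, access::write> dst [[texture(1)]],
    ushort3 gid [[thread_position_in_grid]])   // (x, y, dstSlice), grid = 4 x 4 x 128
{
    half4 out;
    for (ushort i = 0; i < 4; ++i) {
        // Destination channel -> flat index, assuming channels-last (HWC) ordering.
        uint c = gid.z * 4 + i;
        uint flat = (uint(gid.y) * 4 + gid.x) * 512 + c;
        // The source is 1x1, so the flat index selects a slice and a component within it.
        half4 v = src.read(uint2(0, 0), flat / 4);
        out[i] = v[flat % 4];
    }
    dst.write(out, uint2(gid.x, gid.y), gid.z);
}
"""

func encodeReshape(device: MTLDevice,
                   commandBuffer: MTLCommandBuffer,
                   source: MPSImage,
                   destination: MPSImage) throws {
    let library = try device.makeLibrary(source: reshapeKernelSource, options: nil)
    let function = library.makeFunction(name: "reshape_1x1x8192_to_4x4x512")!
    let pipeline = try device.makeComputePipelineState(function: function)

    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.setTexture(source.texture, index: 0)
    encoder.setTexture(destination.texture, index: 1)
    encoder.dispatchThreadgroups(MTLSize(width: 1, height: 1, depth: 128),
                                 threadsPerThreadgroup: MTLSize(width: 4, height: 4, depth: 1))
    encoder.endEncoding()
}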
A feature request in Radar might also be warranted. Please also look in the latest macOS / iOS seed to see if Apple recently added a reshape filter for you.

SKPhysicsBodies appear to be slightly out of place

Edit: I have been able to solve this problem by using PhysicsEditor to make a polygonal physicsbody instead of using SKPhysicsBody(... alphaThreshold: ... )
--
For some reason I'm having trouble with what I'm assuming is SKPhysicsBodies being slightly out of place. While using showPhysics my stationary obstacle nodes appear to have their physics bodies in the correct position; however, I am able to trigger collisions without actually touching the obstacle. If you look at the image below, it shows where I have found the physics bodies to be off centre, despite showPhysics telling me otherwise. (Note, the player node travels in the middle of these obstacle nodes.)
I also thought it would be worth noting that while the player is travelling, its physics body appears to travel slightly ahead, but I figured this is probably normal.
I also use SKPhysicsBody(... alphaThreshold: ... ) to create the physics bodies from .png images.
Cheers.
Edit: Here's how I create the obstacle physics bodies. Once they're added into the worldNode they are left alone until they need to be removed. Apart from that I don't change them in any way.
let obstacleNode = SKSpriteNode(imageNamed: ... )
obstacleNode.position = CGPoint(x: ..., y: ...)
obstacleNode.name = "obstacle"
obstacleNode.physicsBody = SKPhysicsBody(texture: obstacleNode.texture!, alphaThreshold: 0.1, size: CGSize(width: obstacleNode.texture!.size().width, height: obstacleNode.texture!.size().height))
obstacleNode.physicsBody?.affectedByGravity = false
obstacleNode.physicsBody?.isDynamic = false
obstacleNode.physicsBody!.categoryBitMask = CC.wall.rawValue
obstacleNode.physicsBody!.collisionBitMask = CC.player.rawValue
obstacleNode.physicsBody!.contactTestBitMask = CC.player.rawValue
worldNode.addChild(obstacleNode)
The player node is treated the same way, here is how the player moves.
playerNode.physicsBody?.velocity = CGVector(dx: dx, dy: dy)
I'm assuming you aren't showing the exact images that you used to create your SKSpriteNode and SKPhysicsBody instances. Since you are using a texture to define the shape of your SKPhysicsBody you are likely running up against this:
SKPhysicsBody documentation
If you do not want to create your own shapes, you can use SpriteKit to create a shape for you based on the sprite’s texture.
This is easy and convenient but it can sometimes give unexpected results depending on the textures you are using for your sprite. Perhaps try making an explicit mask or using a simple shape to represent your physics body. There are very good examples and guidelines in that documentation.
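For example, a quick sketch of the simple-shape route (the rectangle and the polygon points below are placeholders):
// Approximate the obstacle with a rectangle instead of deriving the body from the texture.
let rectangularBody = SKPhysicsBody(rectangleOf: obstacleNode.size)

// Or trace the outline yourself with an explicit (convex, counterclockwise) path.
let outline = CGMutablePath()
outline.move(to: CGPoint(x: -40, y: -40))
outline.addLine(to: CGPoint(x: 40, y: -40))
outline.addLine(to: CGPoint(x: 0, y: 40))
outline.closeSubpath()
let polygonBody = SKPhysicsBody(polygonFrom: outline)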
I would also follow this pattern when you set the properties on your objects:
// safely unwrap and handle failure if it fails
guard let texture = obstacleNode.texture else { return }
// create the physics body
let physicsBody = SKPhysicsBody(texture: texture,
                                alphaThreshold: 0.1,
                                size: CGSize(width: texture.size().width,
                                             height: texture.size().height))
// safely set its properties without the need to unwrap an Optional
physicsBody.affectedByGravity = false
// set the rest of the properties
// set the physics body property on the node
obstacleNode.physicsBody = physicsBody
By setting the properties on a concrete instance of SKPhysicsBody and fully unwrapping and testing Optionals, you minimize the chance of a run-time crash that may be difficult to debug.

How to draw concave shape using Stencil test on Metal

This is the first time I'm trying to use the stencil test. I have seen some examples using OpenGL and a few for Metal, but those focused on the depth test instead. I understand the theory behind the stencil test, but I don't know how to set it up in Metal.
I want to draw irregular shapes. For the sake of simplicity, let's consider the following 2D polygon:
I want the stencil to pass where the number of overlapping triangles is odd, so that I can reach something like this, where the white area is the area to be ignored:
I'm doing the following steps in the exact order:
Setting the depthStencilPixelFormat:
mtkView.depthStencilPixelFormat = .stencil8
mtkView.clearStencil = .allZeros
Stencil attachment:
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .stencil8, width: drawable.texture.width, height: drawable.texture.height, mipmapped: true)
textureDescriptor.textureType = .type2D
textureDescriptor.storageMode = .private
textureDescriptor.usage = [.renderTarget, .shaderRead, .shaderWrite]
mainPassStencilTexture = device.makeTexture(descriptor: textureDescriptor)
let stencilAttachment = MTLRenderPassStencilAttachmentDescriptor()
stencilAttachment.texture = mainPassStencilTexture
stencilAttachment.clearStencil = 0
stencilAttachment.loadAction = .clear
stencilAttachment.storeAction = .store
renderPassDescriptor.stencilAttachment = stencilAttachment
Stencil descriptor:
let stencilDescriptor = MTLDepthStencilDescriptor()
stencilDescriptor.depthCompareFunction = MTLCompareFunction.always
stencilDescriptor.isDepthWriteEnabled = true
stencilDescriptor.frontFaceStencil.stencilCompareFunction = MTLCompareFunction.equal
stencilDescriptor.frontFaceStencil.stencilFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthStencilPassOperation = MTLStencilOperation.invert
stencilDescriptor.frontFaceStencil.readMask = 0x1
stencilDescriptor.frontFaceStencil.writeMask = 0x1
stencilDescriptor.backFaceStencil = nil
depthStencilState = device.makeDepthStencilState(descriptor: stencilDescriptor)
And lastly, I'm setting the reference value and the stencil state in the main pass:
renderEncoder.setStencilReferenceValue(0x1)
renderEncoder.setDepthStencilState(self.depthStencilState)
Am I missing something? The result I get is just as if there were no stencil at all. I can see some differences when changing the settings of the depth test, but nothing happens when changing the settings of the stencil...
Any clue?
Thank you in advance
You're clearing the stencil texture to 0. The reference value is 1. The comparison function is "equal". So, the comparison will fail (1 does not equal 0). The operation for when the stencil comparison fails is "keep", so the stencil texture remains 0. Nothing changes for subsequent fragments.
I would expect that you'd get no rendering, although depending on the order of your vertexes and the front-face winding mode, you may be looking at the back faces of your triangles, in which case the stencil test is effectively disabled. If you don't otherwise care about front vs. back, just set both stencil descriptors the same way.
I think you need to do two passes: first, a stencil-only render; second, the color render governed by the stencil buffer. For the stencil only, you would make the compare function .always. This will toggle (invert) the low bit for each triangle that's drawn over a given pixel, giving you an indication of even or odd count. Because neither the compare function nor the operation involve the reference value, it doesn't matter what it is.
For the second pass, you'd set the compare function to .equal and the reference value to 1. The operations should all be .keep. Also, make sure to set the stencil attachment load action to .load (not .clear).
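As a sketch, assuming a device and leaving the pipeline setup and draw calls out, the two depth/stencil states could look like this:
// Pass 1: stencil-only fill. Every fragment inverts the low bit, so pixels covered
// by an odd number of triangles end up holding 1.
let fillStencil = MTLStencilDescriptor()
fillStencil.stencilCompareFunction = .always
fillStencil.stencilFailureOperation = .keep
fillStencil.depthFailureOperation = .keep
fillStencil.depthStencilPassOperation = .invert
fillStencil.readMask = 0x1
fillStencil.writeMask = 0x1

let fillDescriptor = MTLDepthStencilDescriptor()
fillDescriptor.frontFaceStencil = fillStencil
fillDescriptor.backFaceStencil = fillStencil   // same behaviour for both windings
let fillState = device.makeDepthStencilState(descriptor: fillDescriptor)

// Pass 2: colour render gated by the stencil. Only pixels whose stencil value equals
// the reference (1) survive, and nothing is written back to the stencil buffer.
let drawStencil = MTLStencilDescriptor()
drawStencil.stencilCompareFunction = .equal
drawStencil.stencilFailureOperation = .keep
drawStencil.depthFailureOperation = .keep
drawStencil.depthStencilPassOperation = .keep
drawStencil.readMask = 0x1
drawStencil.writeMask = 0x1

let drawDescriptor = MTLDepthStencilDescriptor()
drawDescriptor.frontFaceStencil = drawStencil
drawDescriptor.backFaceStencil = drawStencil
let drawState = device.makeDepthStencilState(descriptor: drawDescriptor)

// Second pass only: load the stencil attachment with .load (not .clear), then
// renderEncoder.setDepthStencilState(drawState)
// renderEncoder.setStencilReferenceValue(1)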

Layer created is empty even after assigning content to it

I'm just trying to do a quick prototype following this tutorial:
https://www.youtube.com/watch?v=3zaxrXK7Nac
I'm using my own design for this, the problem is as follows:
When I create a new layer (time 8:13 of the posted video) and try to set one of my imported layers as the content of this new layer by using the image property, I get no results.
If I bring this new layer to the screen I can only see a black background with transparency; according to the tutorial, it should have the layer I'm assigning to it via the image property.
Here is an example of my code:
sketch = Framer.Importer.load("imported/Untitled#2x")

explore_layer = new Layer
    width: 750
    height: 1334
    image: sketch.explore.explore_group
    x: screen.width

sketch.Tab_3.on Events.Click, ->
    explore_layer.animate
        properties:
            x: 0
            y: 0
        curve: "spring(400, 35, 0)"
Here is also a screenshot of my layers
https://gyazo.com/f3fccf7f38813744ea17d259463fabdc
Framer will always import the groups in the selected page of Sketch, and all the groups on that page will be transformed into layers that are available on the sketch object directly.
Also: you're now setting the image of a layer to a layer object itself, instead of the image of the sketch layer.
So to get it to work, you need to do a couple of things:
Place all the elements that you want to use on the same page in Sketch
After importing, access those elements directly from the sketch object (so sketch.explore_group instead of sketch.explore.explore_group)
Use the image of the sketch layer, or use the sketch layer itself in your prototype.
Here's an example of how that would look:
sketch = Framer.Importer.load("imported/Untitled#2x")

explore_layer = new Layer
    width: 750
    height: 1334
    image: sketch.explore_group.image
    x: screen.width

sketch.Tab_3.on Events.Click, ->
    explore_layer.animate
        properties:
            x: 0
            y: 0
        curve: "spring(400, 35, 0)"
Or even shorter, and with an updated animation syntax:
sketch = Framer.Importer.load("imported/Untitled#2x")
sketch.explore_group.x = screen.width

sketch.Tab_3.on Events.Click, ->
    sketch.explore_group.animate
        x: 0
        y: 0
        options:
            curve: Spring(tension: 400, friction: 35)