Collision is not working - Swift

I am trying to make a platform game. I have made a level in GameScene.sks, but I couldn't figure out how to detect collisions between the .sks file and the player, so I decided to use a level node instead. I have the code for turning the nodes into 32-bit category masks, and I put it into the code before self.addChild, but it does not seem to work. Any help?
struct collisionPhysics {
    static let alien: UInt32 = 1 << 0
    static let level: UInt32 = 1 << 1
}

alien.physicsBody?.categoryBitMask = collisionPhysics.alien
alien.physicsBody?.contactTestBitMask = collisionPhysics.level
alien.physicsBody?.collisionBitMask = collisionPhysics.level
level.physicsBody?.categoryBitMask = collisionPhysics.level
level.physicsBody?.contactTestBitMask = collisionPhysics.alien
level.physicsBody?.collisionBitMask = collisionPhysics.alien
James
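A quick note on top of the bit masks (a hedged sketch, since the rest of the setup isn't shown): both nodes need actual physics bodies before the masks do anything, and the scene must be the physics world's contactDelegate for didBegin(_:) to be called. A minimal version might look like this, with placeholder node colors and sizes:
import SpriteKit

// Minimal sketch, assuming the collisionPhysics struct above;
// the node colors and sizes below are placeholders.
class GameScene: SKScene, SKPhysicsContactDelegate {
    let alien = SKSpriteNode(color: .green, size: CGSize(width: 32, height: 32))
    let level = SKSpriteNode(color: .gray, size: CGSize(width: 300, height: 32))

    override func didMove(to view: SKView) {
        physicsWorld.contactDelegate = self  // without this, didBegin(_:) never fires

        alien.physicsBody = SKPhysicsBody(rectangleOf: alien.size)
        alien.physicsBody?.categoryBitMask = collisionPhysics.alien
        alien.physicsBody?.contactTestBitMask = collisionPhysics.level
        alien.physicsBody?.collisionBitMask = collisionPhysics.level
        addChild(alien)

        level.physicsBody = SKPhysicsBody(rectangleOf: level.size)
        level.physicsBody?.isDynamic = false  // static level geometry
        level.physicsBody?.categoryBitMask = collisionPhysics.level
        level.physicsBody?.contactTestBitMask = collisionPhysics.alien
        level.physicsBody?.collisionBitMask = collisionPhysics.alien
        addChild(level)
    }

    func didBegin(_ contact: SKPhysicsContact) {
        print("contact between \(contact.bodyA.categoryBitMask) and \(contact.bodyB.categoryBitMask)")
    }
}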

Related

How do you create and use an indirectCommandBuffer in Swift for Metal GPU computations?

I am currently working on a project that uses the GPU to do computations on large datasets. I'm investigating the potential of indirectCommandBuffers to speed up our code, especially since we're having trouble with its speed on the M1 processor (interestingly enough, our program runs very fast on AMD Metal GPUs). Another reason I want to do this is to avoid creating the exact same compute command encoders 500+ times.
However, I'm having trouble coding the indirect command buffers, and I can't seem to find much documentation for them online, especially in Swift. When I first attempted this in the project I'm working on, I found that on the M1 it would just crash when I tried setting the MTLComputePipelineState of the MTLIndirectComputeCommand using .setComputePipelineState(), whereas on AMD chips it would hang when trying to commit and execute the commands in the indirectCommandBuffer, and if it got through everything, it would just return pointers to zeroed data.
I've created what is hopefully a minimal reproducible example to show the issue I'm having; it just adds two NumPy arrays received from C, 1000 times. Be aware this is just an example to illustrate the issue; our real goal is to improve some finite-difference code with Metal.
I'm currently running macOS 12.4.
Below is the Swift function:
import Metal
import MetalPerformanceShaders
import Accelerate
import Foundation
@_cdecl("metalswift_add")
public func addition(array1: UnsafeMutablePointer<Float>, array2: UnsafeMutablePointer<Float>, length: Int) -> UnsafeMutablePointer<Float> {
    var bFound = false
    var device: MTLDevice!
    device = MTLCreateSystemDefaultDevice()!

    // Build a pipeline state that supports indirect command buffers.
    let defaultLibrary = try! device.makeLibrary(filepath: "metal.metallib")
    let metalswift_addfunction = defaultLibrary.makeFunction(name: "metalswift_add")!
    let descriptor = MTLComputePipelineDescriptor()
    descriptor.computeFunction = metalswift_addfunction
    descriptor.supportIndirectCommandBuffers = true
    let computePipelineState = try! device.makeComputePipelineState(descriptor: descriptor, options: .init(), reflection: nil)

    // Wrap the incoming C arrays in MTLBuffers.
    var Ref1: UnsafeMutablePointer<Float> = UnsafeMutablePointer(array1)
    var Ref2: UnsafeMutablePointer<Float> = UnsafeMutablePointer(array2)
    var size = length
    let SizeBuffer: UnsafeMutableRawPointer = UnsafeMutableRawPointer(&size)
    let ll = MemoryLayout<Float>.stride * length
    var Buffer1: MTLBuffer! = device.makeBuffer(bytes: Ref1, length: ll, options: [])
    var Buffer2: MTLBuffer! = device.makeBuffer(bytes: Ref2, length: ll, options: [])
    var MetalBuffer: MTLBuffer! = device.makeBuffer(length: ll, options: [])
    let Size: MTLBuffer! = device.makeBuffer(bytes: SizeBuffer, length: MemoryLayout<Int>.size, options: [])

    // Describe and create the indirect command buffer with a single command.
    var icbDescriptor = MTLIndirectCommandBufferDescriptor()
    icbDescriptor.commandTypes.insert(MTLIndirectCommandType.concurrentDispatchThreads)
    icbDescriptor.inheritBuffers = false
    icbDescriptor.inheritPipelineState = false
    icbDescriptor.maxKernelBufferBindCount = 4

    var indirectCommandBuffer = device.makeIndirectCommandBuffer(descriptor: icbDescriptor, maxCommandCount: 1)!
    let icbCommand = indirectCommandBuffer.indirectComputeCommandAt(0)
    icbCommand.setComputePipelineState(computePipelineState)
    icbCommand.setKernelBuffer(Buffer1, offset: 0, at: 0)
    icbCommand.setKernelBuffer(Buffer2, offset: 0, at: 1)
    icbCommand.setKernelBuffer(MetalBuffer, offset: 0, at: 2)
    icbCommand.setKernelBuffer(Size, offset: 0, at: 3)
    icbCommand.concurrentDispatchThreads(MTLSize(width: computePipelineState.threadExecutionWidth, height: 1, depth: 1),
                                         threadsPerThreadgroup: MTLSize(width: computePipelineState.maxTotalThreadsPerThreadgroup, height: 1, depth: 1))
    icbCommand.setBarrier()

    // Replay the same indirect command 1000 times.
    for i in 0..<1000 {
        print(i)
        let commandQueue = device.makeCommandQueue()!
        let commandBuffer = commandQueue.makeCommandBuffer()!
        let computeCommandEncoder = commandBuffer.makeComputeCommandEncoder()!
        computeCommandEncoder.executeCommandsInBuffer(indirectCommandBuffer, range: 0..<1)
        computeCommandEncoder.endEncoding()
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()
    }

    return MetalBuffer!.contents().assumingMemoryBound(to: Float.self)
}
This is the Metal function:
#include <metal_stdlib>
#include <metal_math>
using namespace metal;

#define size (*size_pr)

kernel void metalswift_add(const device float *Buffer1 [[ buffer(0) ]],
                           const device float *Buffer2 [[ buffer(1) ]],
                           device float *MetalBuffer   [[ buffer(2) ]],
                           const device int *size_pr   [[ buffer(3) ]]) {
    for (int i = 0; i < size; i++) {
        MetalBuffer[i] = Buffer1[i] + Buffer2[i];
    }
}
I had it working without the indirect command encoders, so I believe it's probably an issue with how I coded them rather than with the Metal function.
If any other information is needed, let me know! Sorry if this is of low quality; this is my first question on Stack Overflow.
Update: I've updated the code above with some changes that stop it from crashing at runtime. However, I'm still running into the hanging issue on AMD Metal GPUs, and on the M1 it seems like it only goes through the Metal function once.
You aren't creating the MTLComputePipelineState correctly. To use a pipeline state in an ICB, you need to set supportIndirectCommandBuffers to true in a pipeline state descriptor. Kinda like this:
let metalswift_addfunction = defaultLibrary.makeFunction(name: "metalswift_add")!
let descriptor = MTLComputePipelineDescriptor()
descriptor.computeFunction = metalswift_addfunction
descriptor.supportIndirectCommandBuffers = true
let computePipelineState = try! device.makeComputePipelineState(descriptor: descriptor, options: .init(), reflection: nil)
With that, it should work.
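If you want to catch this earlier, the compiled pipeline state also reports whether it can be used from an ICB, so you could add a check like this (a small hedged sketch):
// Verify at runtime that the pipeline was built for indirect command buffers.
precondition(computePipelineState.supportIndirectCommandBuffers,
             "Pipeline must be created with supportIndirectCommandBuffers = true")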
By the way, I recommend running with shader validation; it does catch this error. You can enable it in the scheme's Diagnostics settings or by passing an environment variable. You can find more information about shader validation by reading man MetalValidation in Terminal.

How to get CVPixelBuffer handle from UnsafeMutablePointer<UInt8> in Swift?

I got a decoded AVFrame whose format shows 160/Videotoolbox_vld. After googling some articles (here) and viewing the FFmpeg source code (here, and here), the CVBuffer handle should be at AVFrame.data[3]. But the CVBuffer I got seems invalid: any CVPixelBufferGetXXX() function returns 0 or nil.
If I use av_hwframe_transfer_data() as FFmpeg's example hw_decode.c does, the sample can be downloaded from the HW to a SW buffer, and its AVFrame.format will be nv12. After converting it via sws_scale to BGRA, the sample can be shown on a view with correct content.
So I think the VideoToolbox-decoded frame is OK; the way I convert AVFrame.data[3] to a CVBuffer may be wrong. I have just learned how to access C pointers in Swift, but I am not sure how to correctly read a resource handle (a CVBuffer) from a pointer.
The following is how I try to extract the CVBuffer from the AVFrame:
var pFrameOpt: UnsafeMutablePointer<AVFrame>? = av_frame_alloc()
avcodec_receive_frame(..., pFrameOpt)
let data3: UnsafeMutablePointer<UInt8>? = pFrameOpt?.pointee.data.3
data3?.withMemoryRebound(to: CVBuffer.self, capacity: 1) { pCvBuf in
    let fW = pFrameOpt!.pointee.width    // print 3840
    let fH = pFrameOpt!.pointee.height   // print 2160
    let fFmt = pFrameOpt!.pointee.format // print 160

    let cvBuf: CVBuffer = pCvBuf.pointee
    let a1 = CVPixelBufferGetDataSize(cvBuf)               // print 0
    let a2 = CVPixelBufferGetPixelFormatType(cvBuf)        // print 0
    let a3 = CVPixelBufferGetWidth(cvBuf)                  // print 0
    let a4 = CVPixelBufferGetHeight(cvBuf)                 // print 0
    let a5 = CVPixelBufferGetBytesPerRow(cvBuf)            // print 0
    let a6 = CVPixelBufferGetBytesPerRowOfPlane(cvBuf, 0)  // print 0
    let a7 = CVPixelBufferGetWidthOfPlane(cvBuf, 0)        // print 0
    let a8 = CVPixelBufferGetHeightOfPlane(cvBuf, 0)       // print 0
    let a9 = CVPixelBufferGetPlaneCount(cvBuf)             // print 0
    let a10 = CVPixelBufferIsPlanar(cvBuf)                 // print false
    let a11 = CVPixelBufferGetIOSurface(cvBuf)             // print nil
    let a12 = CVPixelBufferGetBaseAddress(cvBuf)           // print nil
    let a13 = CVPixelBufferGetBaseAddressOfPlane(cvBuf, 0) // print nil
    let b1 = CVImageBufferGetCleanRect(cvBuf)              // print 0, 0, 0, 0
    let b2 = CVImageBufferGetColorSpace(cvBuf)             // print nil
    let b3 = CVImageBufferGetDisplaySize(cvBuf)            // print 0, 0, 0, 0
    let b4 = CVImageBufferGetEncodedSize(cvBuf)            // print 0, 0, 0, 0
    let b5 = CVImageBufferIsFlipped(cvBuf)                 // print false

    // bad exec
    var cvTextureOut: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, ..., cvBuf, nil, .bgra8Unorm, 3840, 2160, 0, ...)
}
CVBuffer is not a fixed size, so rebinding the memory won't work in this way. You need to do this:
Unmanaged<CVBuffer>.fromOpaque(data!).takeRetainedValue()
However, the bottom line is FFmpeg's VideoToolbox backend is not creating a CVPixelBuffer with kCVPixelBufferMetalCompatibilityKey set to true. You won't be able to call CVMetalTextureCacheCreateTextureFromImage(...) successfully in any case.
You could consider using a CVPixelBufferPool with appropriate settings (including kCVPixelBufferMetalCompatibilityKey set to true) and then using VTPixelTransferSession to quickly copy FFmpeg's pixel buffer to your own.
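For illustration, a hedged sketch of that pool-plus-transfer approach might look like the following (the dimensions and pixel format are placeholders taken from the question, and error handling is omitted):
import CoreVideo
import VideoToolbox

// Pixel buffer attributes for a Metal-compatible, IOSurface-backed pool.
let attrs: [CFString: Any] = [
    kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey: 3840,   // placeholder
    kCVPixelBufferHeightKey: 2160,  // placeholder
    kCVPixelBufferMetalCompatibilityKey: true,
    kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary
]

var pool: CVPixelBufferPool?
CVPixelBufferPoolCreate(kCFAllocatorDefault, nil, attrs as CFDictionary, &pool)

var session: VTPixelTransferSession?
VTPixelTransferSessionCreate(allocator: kCFAllocatorDefault, pixelTransferSessionOut: &session)

// Copy FFmpeg's pixel buffer (cvBuf, obtained as above) into a Metal-compatible one.
var dstBuffer: CVPixelBuffer?
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool!, &dstBuffer)
VTPixelTransferSessionTransferImage(session!, from: cvBuf, to: dstBuffer!)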
It seems I wrongly cast void* to CVPixelBuffer* instead of casting void* directly to CVPixelBuffer. I cannot find a Swift way to do such a C-style cast of a raw pointer value to an object reference (using as! CVPixelBuffer causes a crash), so I created a C function to do the casting:
// util.h
#include <CoreVideo/CVPixelBuffer.h>
CVPixelBufferRef CastToCVPixelBuffer(void* p);

// util.c
CVPixelBufferRef CastToCVPixelBuffer(void* p)
{
    return (CVPixelBufferRef)p;
}

// BridgeHeader.h
#include "util.h"
Then I pass the UnsafeMutablePointer<UInt8> in and get the CVPixelBuffer handle out:
let pFrameOpt: UnsafeMutablePointer<AVFrame>? = ...
let data3: UnsafeMutablePointer<UInt8>? = pFrameOpt?.pointee.data.3
let cvBuf: CVBuffer = CastToCVPixelBuffer(data3).takeUnretainedValue()
let width = CVPixelBufferGetWidth(cvBuf) // print 3840
let height = CVPixelBufferGetHeight(cvBuf) // print 2160
Try this:
let cvBuf: CVBuffer = Array(UnsafeMutableBufferPointer(start: data3, count: 3))
    .withUnsafeBufferPointer {
        $0.baseAddress!.withMemoryRebound(to: CVBuffer.self, capacity: 1) { $0 }
    }.pointee
or maybe even
let cvBuf: CVBuffer = unsafeBitcast(UnsafeMutableBufferPointer(start: data3, count: 3), to: CVBuffer.self)
/**
    @function   CVPixelBufferGetBaseAddressOfPlane
    @abstract   Returns the base address of the plane at planeIndex in the PixelBuffer.
    @discussion Retrieving the base address for a PixelBuffer requires that the buffer base address be locked
                via a successful call to CVPixelBufferLockBaseAddress. On OSX 10.10 and earlier, or iOS 8 and
                earlier, calling this function with a non-planar buffer will have undefined behavior.
    @param      pixelBuffer Target PixelBuffer.
    @param      planeIndex  Identifying the plane.
    @result     Base address of the plane, or NULL for non-planar CVPixelBufferRefs.
*/
@available(iOS 4.0, *)
public func CVPixelBufferGetBaseAddressOfPlane(_ pixelBuffer: CVPixelBuffer, _ planeIndex: Int) -> UnsafeMutableRawPointer?
Maybe you can try calling CVPixelBufferLockBaseAddress before using CVPixelBufferGetBaseAddressOfPlane.
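For example (a hedged sketch; the base address is only valid while the buffer is locked):
CVPixelBufferLockBaseAddress(cvBuf, .readOnly)
let planeBase = CVPixelBufferGetBaseAddressOfPlane(cvBuf, 0) // non-nil for planar buffers once locked
CVPixelBufferUnlockBaseAddress(cvBuf, .readOnly)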

"Attemped to add a SKNode which already has a parent:" in Repeat Loop. Any simple work around?

I am pretty new to programming, and I am trying to pile up random blocks dynamically until they hit the upper frame, but it seems that Swift doesn't let me do so. Did I miss anything? Any input is appreciated.
let blocks = [block1, block2, block3, block4, block5, block6,
              block7, block8, block9, block10, block11, block12]
var block: SKSpriteNode!
let blockX: Double = 0.0
var blockY: Double = -(self.size.height / 2)

repeat {
    block = blocks.randomBlock()
    block.zPosition = 2
    block.position = CGPoint(x: blockX, y: blockY)
    block.size.height = 50
    block.size.width = 50
    self.addChild(block)
    blockY += 50
} while (block.position.y < self.size.height)

extension Array {
    func randomBlock() -> Element {
        let randint = Int(arc4random_uniform(UInt32(self.count)))
        return self[randint]
    }
}
You need to have some way of tracking which blocks have been selected, and to ensure that they don't get selected again. The method below uses an array to store the indexes of the selected blocks, then recurses until an unused index is found.
private var usedBlocks = [Int]()

func randomBlock() -> Int {
    guard usedBlocks.count != blocks.count else { return -1 }
    let random = Int(arc4random_uniform(UInt32(blocks.count)))
    if usedBlocks.contains(random) {
        return randomBlock()
    }
    usedBlocks.append(random)
    return random
}
In your loop, change your initializer to:
let index = randomBlock()
if index > -1 {
    block = blocks[index]
    block.zPosition = 2
    block.position = CGPoint(x: blockX, y: blockY)
}
Remember that if you restart the game or start a new level, etc., you must clear all of the stored indexes from usedBlocks:
usedBlocks.removeAll()
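Putting the pieces together, a hedged sketch of the full loop might look like this (using the -1 sentinel as the exit once every block has been placed):
repeat {
    let index = randomBlock()
    if index == -1 { break } // every block has been used
    block = blocks[index]
    block.zPosition = 2
    block.position = CGPoint(x: blockX, y: blockY)
    block.size = CGSize(width: 50, height: 50)
    self.addChild(block)
    blockY += 50
} while block.position.y < self.size.height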

How does OSReadLittleInt16() translate to Swift?

I want to translate my Obj-C code to Swift.
I got these 3 lines in Obj-C:
NSData* data = ...
unsigned char* bytes = (unsigned char*) data.bytes;
int16_t delta = OSReadLittleInt16(bytes, 0);
The first two lines translate to:
let data: NSData = ...
let bytes = UnsafePointer<UInt8>(data.bytes)
The third line is not that easy as I don't know:
Does int16_t simply translate to Int16?
OSReadLittleInt16 is not available in Swift. Do I need to import something?
OSReadLittleInt16 is defined in /usr/include/libkern/OSByteOrder.h.
Use .bigEndian and .littleEndian
let i :Int16 = 1
print("i: \(i)")
let le :Int16 = i.littleEndian
print("le: \(le)")
let be :Int16 = i.bigEndian
print("be: \(be)")
Output:
i: 1
le: 1
be: 256
let data: NSData! = "12345678".dataUsingEncoding(NSUTF8StringEncoding)
let bytes = UnsafePointer<UInt16>(data.bytes)
let ui0 = bytes[0]
let ui1 = bytes[1]
print("ui0: \(String(ui0, radix:16))")
print("ui1: \(String(ui1, radix:16))")
let be0 = bytes[0].bigEndian
let be1 = bytes[1].bigEndian
print("be0: \(String(be0, radix:16))")
print("be1: \(String(be1, radix:16))")
let le0 = bytes[0].littleEndian
let le1 = bytes[1].littleEndian
print("le0: \(String(le0, radix:16))")
print("le1: \(String(le1, radix:16))")
Output:
ui0: 3231
ui1: 3433
be0: 3132
be1: 3334
le0: 3231
le1: 3433
Note that the default in iOS is little endian.
Here is an alternative approach: OSReadLittleInt16() is defined as a macro in <libkern/OSByteOrder.h> as
#define OSReadLittleInt16(base, byteOffset) _OSReadInt16(base, byteOffset)
The macro is not imported into Swift, but the _OSReadInt16()
function is, so you can do
let delta = UInt16(littleEndian: _OSReadInt16(bytes, 0))
A possible advantage is that this works also on odd offsets, even if the architecture allows only aligned memory access.
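As a side note, on Swift 5.7 and later a macro-free alternative (a hedged sketch, not from the original answers) is to load the bytes from a Data value; loadUnaligned also handles odd offsets:
import Foundation

let data = Data([0x01, 0x02, 0x03, 0x04])
let delta = data.withUnsafeBytes { raw in
    Int16(littleEndian: raw.loadUnaligned(fromByteOffset: 0, as: Int16.self))
}
// delta == 0x0201 == 513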

SKPhysicsBody avoid collision Swift/SpriteKit

I have 3 SKSpriteNodes in my scene: one bird, one coin, and a border around the scene. I don't want the coin and the bird to collide with each other, but both should collide with the border.
I assign a different collisionBitMask and categoryBitMask to every node:
enum CollisionType: UInt32 {
    case Bird = 1
    case Coin = 2
    case Border = 3
}
Like so:
bird.physicsBody!.categoryBitMask = CollisionType.Bird.rawValue
bird.physicsBody!.collisionBitMask = CollisionType.Border.rawValue
coin.physicsBody!.categoryBitMask = CollisionType.Coin.rawValue
coin.physicsBody!.collisionBitMask = CollisionType.Border.rawValue
But the coin and the bird still collide with each other.
What am I doing wrong?
The bit mask is 32 bits wide. Declaring the values like you did corresponds to:
enum CollisionType: UInt32 {
    case Bird = 1   // 00000000000000000000000000000001
    case Coin = 2   // 00000000000000000000000000000010
    case Border = 3 // 00000000000000000000000000000011
}
Here Border = 3 is 0b11, which overlaps both the Bird and Coin bits, so the bird and the coin still collide with each other. What you want to do is set your border value to 4, in order to have the following bit masks instead:
enum CollisionType: UInt32 {
    case Bird = 1   // 00000000000000000000000000000001
    case Coin = 2   // 00000000000000000000000000000010
    case Border = 4 // 00000000000000000000000000000100
}
Keep in mind that you'll have to follow the same pattern for the next bit masks: 8, 16, 32, and so on.
Edit:
Also, you might want to use a struct instead of an enum, with a syntax that reads a bit more easily (it's not mandatory, just a matter of preference):
struct PhysicsCategory {
    static let None   : UInt32 = 0
    static let All    : UInt32 = UInt32.max
    static let Bird   : UInt32 = 0b1   // 1
    static let Coin   : UInt32 = 0b10  // 2
    static let Border : UInt32 = 0b100 // 4
}
You could then use it like this:
bird.physicsBody!.categoryBitMask = PhysicsCategory.Bird
bird.physicsBody!.collisionBitMask = PhysicsCategory.Border
coin.physicsBody!.categoryBitMask = PhysicsCategory.Coin
coin.physicsBody!.collisionBitMask = PhysicsCategory.Border
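One nice property of the power-of-two layout (a hypothetical extension, not part of the question): categories can be combined with a bitwise OR, so if the bird should later collide with both the border and the coin, you could write:
bird.physicsBody!.collisionBitMask = PhysicsCategory.Border | PhysicsCategory.Coin // 0b110 = 6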