UnsafePointer withMemoryRebound gives wrong value without debug mode - Swift

Here I am trying to concatenate 5 bytes into a single integer value, and I am running into an issue with UnsafePointer's withMemoryRebound method.
When I am debugging and checking the logs, it gives the correct value. But when I run without debugging, it gives the wrong value (4 out of 5 times). I am confused by this API. Am I using it correctly?
case 1:
let data = [UInt8](rowData) // rowData is type of Data class
let totalKM_BitsArray = [data[8],data[7],data[6],data[5],data[4]]
self.totalKm = UnsafePointer(totalKM_BitsArray).withMemoryRebound(to:UInt64.self, capacity: 1) {$0.pointee}
case 2:
The code below works with debug mode both enabled and disabled, and gives the correct value.
let byte0 : UInt64 = UInt64(data[4])<<64
let byte1 : UInt64 = UInt64(data[5])<<32
let byte2 : UInt64 = UInt64(data[6])<<16
let byte3 : UInt64 = UInt64(data[7])<<8
let byte4 : UInt64 = UInt64(data[8])
self.totalKm = byte0 | byte1 | byte2 | byte3 | byte4
Please suggest the correct way of using UnsafePointer. Why does this issue occur?
Additional information:
let totalKm : UInt64
let data = [UInt8](rowData) // data contain [100, 200, 28, 155, 0, 0, 0, 26, 244, 0, 0, 0, 45, 69, 0, 0, 0, 4, 246]
let totalKM_BitsArray = [data[8],data[7],data[6],data[5],data[4]] // contain [ 244,26,0,0,0]
self.totalKm = UnsafePointer(totalKM_BitsArray).withMemoryRebound(to:UInt64.self, capacity: 1) {$0.pointee}
// printing in the log gives the correct value; running on device gives a wrong value like 3544649566089386
self.totalKm = byte0 | byte1 | byte2 | byte3 | byte4
// output is 6900 This is correct as expected

There are a few problems with this approach:
let data = [UInt8](rowData) // rowData is type of Data class
let totalKM_BitsArray = [data[8], data[7], data[6], data[5], data[4]]
self.totalKm = UnsafePointer(totalKM_BitsArray)
.withMemoryRebound(to:UInt64.self, capacity: 1) { $0.pointee }
Dereferencing UnsafePointer(totalKM_BitsArray) is undefined behaviour, as the pointer to totalKM_BitsArray's buffer is only temporarily valid for the duration of the initialiser call (hopefully at some point in the future Swift will warn on such constructs).
You're trying to bind only 5 instances of UInt8 to UInt64, so the remaining 3 instances will be garbage.
You can only use withMemoryRebound(to:capacity:_:) between types of the same size and stride, which is not the case for UInt8 and UInt64.
It's dependent on the endianness of your platform; data[8] will be the least significant byte on a little-endian platform, but the most significant byte on a big-endian platform.
Your implementation with bit shifting avoids all of these problems (and is generally the safer way to go as you don't have to consider things like layout compatibility, alignment, and pointer aliasing).
However, assuming that you just wanted to pad out your data with zeroes for the most significant bytes, with rowData[4] to rowData[8] making up the rest of the less significant bytes, then you'll want your bit-shifting implementation to look like this:
let rowData = Data([
    100, 200, 28, 155, 0, 0, 0, 26, 244, 0, 0, 0, 45, 69, 0, 0, 0, 4, 246
])
let byte0 = UInt64(rowData[4]) << 32
let byte1 = UInt64(rowData[5]) << 24
let byte2 = UInt64(rowData[6]) << 16
let byte3 = UInt64(rowData[7]) << 8
let byte4 = UInt64(rowData[8])
let totalKm = byte0 | byte1 | byte2 | byte3 | byte4
print(totalKm) // 6900
or, iteratively:
var totalKm: UInt64 = 0
for byte in rowData[4 ... 8] {
    totalKm = (totalKm << 8) | UInt64(byte)
}
print(totalKm) // 6900
or, using reduce(_:_:):
let totalKm = rowData[4 ... 8].reduce(0 as UInt64) { accum, byte in
    (accum << 8) | UInt64(byte)
}
print(totalKm) // 6900
We can even abstract this into an extension on Data in order to make it easier to load such fixed width integers:
enum Endianness {
    case big, little
}

extension Data {
    /// Loads the type `I` from the buffer. If there aren't enough bytes to
    /// represent `I`, the most significant bits are padded with zeros.
    func load<I : FixedWidthInteger>(
        fromByteOffset offset: Int = 0, as type: I.Type, endianness: Endianness = .big
    ) -> I {
        let (wholeBytes, spareBits) = I.bitWidth.quotientAndRemainder(dividingBy: 8)
        let bytesToRead = Swift.min(count, spareBits == 0 ? wholeBytes : wholeBytes + 1)
        let range = startIndex + offset ..< startIndex + offset + bytesToRead

        let bytes: Data
        switch endianness {
        case .big:
            bytes = self[range]
        case .little:
            bytes = Data(self[range].reversed())
        }
        return bytes.reduce(0) { accum, byte in
            (accum << 8) | I(byte)
        }
    }
}
We're doing a bit of extra work here in order to ensure we read the right number of bytes, as well as to handle both big and little endian. But now that we've written it, we can simply write:
let totalKm = rowData[4 ... 8].load(as: UInt64.self)
print(totalKm) // 6900
Note that so far I've assumed that the Data you're getting is zero-indexed. This is safe for the above examples, but isn't necessarily safe depending on where the data is coming from (as it could be a slice). You should be able to do Data(someUnknownDataValue) in order to get a zero-indexed data value that you can work with, although unfortunately I don't believe there's any documentation that guarantees this.
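To make the slice pitfall concrete, here's a minimal illustration with hypothetical values (a Data slice keeps its parent's indices rather than starting from 0):
import Foundation

let full = Data([10, 20, 30, 40])
let slice = full[2...]    // a slice: slice.startIndex == 2, not 0
// slice[0]               // would trap; index 0 is out of bounds for this slice
let rebased = Data(slice) // copying appears to produce a zero-indexed Data
print(rebased[0])         // 30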
In order to ensure you're correctly indexing an arbitrary Data value, you can define the following extension in order to perform the correct offsetting in the case where you're dealing with a slice:
extension Data {
    subscript(offset offset: Int) -> Element {
        get { return self[startIndex + offset] }
        set { self[startIndex + offset] = newValue }
    }

    subscript<R : RangeExpression>(
        offset range: R
    ) -> SubSequence where R.Bound == Index {
        get {
            let concreteRange = range.relative(to: self)
            return self[startIndex + concreteRange.lowerBound ..<
                        startIndex + concreteRange.upperBound]
        }
        set {
            let concreteRange = range.relative(to: self)
            self[startIndex + concreteRange.lowerBound ..<
                 startIndex + concreteRange.upperBound] = newValue
        }
    }
}
which you can then call as e.g. data[offset: 4] or data[offset: 4 ... 8].load(as: UInt64.self).
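For instance, with the rowData value from earlier, and assuming both the load(as:) and offset subscript extensions above are in scope:
let slice = rowData[4...]                                  // slice.startIndex == 4, not 0
let totalKm = slice[offset: 0 ... 4].load(as: UInt64.self) // offsets are relative to the slice
print(totalKm) // 6900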
Finally it's worth noting that while you could probably implement this as a re-interpretation of bits by using Data's withUnsafeBytes(_:) method:
let rowData = Data([
    100, 200, 28, 155, 0, 0, 0, 26, 244, 0, 0, 0, 45, 69, 0, 0, 0, 4, 246
])

let kmData = Data([0, 0, 0] + rowData[4 ... 8])
let totalKm = kmData.withUnsafeBytes { buffer in
    UInt64(bigEndian: buffer.load(as: UInt64.self))
}
print(totalKm) // 6900
This is relying on Data's buffer being 64-bit aligned, which isn't guaranteed. You'll get a runtime error for attempting to load a misaligned value, for example:
let data = Data([0x01, 0x02, 0x03])
let i = data[1...].withUnsafeBytes { buffer in
    buffer.load(as: UInt16.self) // Fatal error: load from misaligned raw pointer
}
By loading individual UInt8 values instead and performing bit shifting, we can avoid such alignment issues (however if/when UnsafeMutableRawPointer supports unaligned loads, this will no longer be an issue).
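As it happens, Swift 5.7 added exactly this via SE-0349: UnsafeRawPointer and UnsafeRawBufferPointer gained loadUnaligned(fromByteOffset:as:). So on a 5.7+ toolchain (an assumption about your deployment environment), the reinterpretation approach can be written without alignment concerns — a sketch:
let kmData = Data([0, 0, 0] + rowData[4 ... 8])
let totalKm = kmData.withUnsafeBytes { buffer in
    // loadUnaligned tolerates any buffer alignment for trivial types like UInt64
    UInt64(bigEndian: buffer.loadUnaligned(as: UInt64.self))
}
print(totalKm) // 6900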

Related

What causes the iteration of an array to conclude prematurely in Metal?

What I'm trying to do
I'm testing out Metal's capability to work with loops. Since I can't define new constants in Metal, I'm passing a uint into a buffer and using it to iterate over an array filled with integers. This is what it looks like in Swift:
let array1: [Int] = [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]
The problem(s)
However, when reading the result array buffer in Swift after completing the loop in Metal, it seems like not every element has been written.
#include <metal_stdlib>
using namespace metal;

kernel void shader(constant int *arr [[ buffer(0) ]],
                   device int *resultArray [[ buffer(1) ]],
                   constant uint &iter [[ buffer(2) ]]) // value of 12
{
    for (uint i = 0; i < iter; i++){
        resultArray[i] = arr[i];
    }
}
out
1
2
3
4
5
6
0
0
0
0
0
0
Similarly, using the iterator to assign each element of resultArray yields strange results:
for (uint i = 0; i < iter; i++){
    resultArray[i] = i;
}
out
4294967296
12884901890
21474836484
30064771078
38654705672
47244640266
0
0
0
0
0
0
Multiplication seems to work
for (uint i = 0; i < iter; i++){
    resultArray[i] = arr[i] * i;
}
out
0
4
12
24
40
60
0
0
0
0
0
0
Addition does not
for (uint i = 0; i < iter; i++){
    resultArray[i] = arr[i] + i;
}
out
4294967297
12884901892
21474836487
30064771082
38654705677
47244640272
0
0
0
0
0
0
When, however, I set iter to a value of, for example, 24 or higher, it at least iterates over the whole array of size 12.
for (uint i = 0; i < iter; i++){ // iter now has a value of 100
    resultArray[i] = arr[i] * iter;
}
out
100
200
300
400
500
600
100
200
300
400
500
600
What is going on here?
MCVE
yes, it's a lot of code to get a simple loop running in Metal, please bear with me
main.swift
import MetalKit

let array1: [Int] = [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]

func gpuProcess(arr1: [Int]) {
    let size = arr1.count // value of 12

    // GPU we want to use
    let device = MTLCreateSystemDefaultDevice()

    // Fifo queue for sending commands to the gpu
    let commandQueue = device?.makeCommandQueue()

    // The library for getting our metal functions
    let gpuFunctionLibrary = device?.makeDefaultLibrary()

    // Grab gpu function
    let additionGPUFunction = gpuFunctionLibrary?.makeFunction(name: "shader")

    var additionComputePipelineState: MTLComputePipelineState!
    do {
        additionComputePipelineState = try device?.makeComputePipelineState(function: additionGPUFunction!)
    } catch {
        print(error)
    }

    // Create buffers to be sent to the gpu from our array
    let arr1Buff = device?.makeBuffer(bytes: arr1,
                                      length: MemoryLayout<Int>.size * size,
                                      options: .storageModeShared)
    let resultBuff = device?.makeBuffer(length: MemoryLayout<Int>.size * size,
                                        options: .storageModeShared)

    // Create the buffer to be sent to the command queue
    let commandBuffer = commandQueue?.makeCommandBuffer()

    // Create an encoder to set values on the compute function
    let commandEncoder = commandBuffer?.makeComputeCommandEncoder()
    commandEncoder?.setComputePipelineState(additionComputePipelineState)

    // Set the parameters of our gpu function
    commandEncoder?.setBuffer(arr1Buff, offset: 0, index: 0)
    commandEncoder?.setBuffer(resultBuff, offset: 0, index: 1)

    // Set parameters for our iterator
    var count = size
    commandEncoder?.setBytes(&count, length: MemoryLayout.size(ofValue: count), index: 2)

    // Figure out how many threads we need to use for our operation
    let threadsPerGrid = MTLSize(width: 1, height: 1, depth: 1)
    let maxThreadsPerThreadgroup = additionComputePipelineState.maxTotalThreadsPerThreadgroup // 1024
    let threadsPerThreadgroup = MTLSize(width: maxThreadsPerThreadgroup, height: 1, depth: 1)
    commandEncoder?.dispatchThreads(threadsPerGrid,
                                    threadsPerThreadgroup: threadsPerThreadgroup)

    // Tell encoder that it is done encoding. Now we can send this off to the gpu.
    commandEncoder?.endEncoding()

    // Push this command to the command queue for processing
    commandBuffer?.commit()

    // Wait until the gpu function completes before working with any of the data
    commandBuffer?.waitUntilCompleted()

    // Get the pointer to the beginning of our data
    var resultBufferPointer = resultBuff?.contents().bindMemory(to: Int.self,
                                                                capacity: MemoryLayout<Int>.size * size)

    // Print out all of our new added together array information
    for _ in 0..<size {
        print("\(Int(resultBufferPointer!.pointee) as Any)")
        resultBufferPointer = resultBufferPointer?.advanced(by: 1)
    }
}

// Call function
gpuProcess(arr1: array1)
compute.metal
#include <metal_stdlib>
using namespace metal;

kernel void shader(constant int *arr [[ buffer(0) ]],
                   device int *resultArray [[ buffer(1) ]],
                   constant uint &iter [[ buffer(2) ]]) // value of 12
{
    for (uint i = 0; i < iter; i++){
        resultArray[i] = arr[i] * iter;
    }
}
You are using 64-bit Int in Swift and 32-bit int in MSL. Your GPU threads are also overlapping their work. Instead, use Int32 in Swift and make each thread process its own piece of data, like this:
import MetalKit

let array1: [Int32] = [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]

func gpuProcess(arr1: [Int32]) {
    let size = arr1.count // value of 12

    // GPU we want to use
    let device = MTLCreateSystemDefaultDevice()

    // Fifo queue for sending commands to the gpu
    let commandQueue = device?.makeCommandQueue()

    // The library for getting our metal functions
    let gpuFunctionLibrary = device?.makeDefaultLibrary()

    // Grab gpu function
    let additionGPUFunction = gpuFunctionLibrary?.makeFunction(name: "shader")

    var additionComputePipelineState: MTLComputePipelineState!
    do {
        additionComputePipelineState = try device?.makeComputePipelineState(function: additionGPUFunction!)
    } catch {
        print(error)
    }

    // Create buffers to be sent to the gpu from our array
    let arr1Buff = device?.makeBuffer(bytes: arr1,
                                      length: MemoryLayout<Int32>.stride * size,
                                      options: .storageModeShared)
    let resultBuff = device?.makeBuffer(length: MemoryLayout<Int32>.stride * size,
                                        options: .storageModeShared)

    // Create the buffer to be sent to the command queue
    let commandBuffer = commandQueue?.makeCommandBuffer()

    // Create an encoder to set values on the compute function
    let commandEncoder = commandBuffer?.makeComputeCommandEncoder()
    commandEncoder?.setComputePipelineState(additionComputePipelineState)

    // Set the parameters of our gpu function
    commandEncoder?.setBuffer(arr1Buff, offset: 0, index: 0)
    commandEncoder?.setBuffer(resultBuff, offset: 0, index: 1)

    // Set parameters for our iterator (UInt32 to match the `uint` parameter in the kernel)
    var count = UInt32(size)
    commandEncoder?.setBytes(&count, length: MemoryLayout.size(ofValue: count), index: 2)

    // Figure out how many threads we need: one thread per array element
    let threadsPerGrid = MTLSize(width: size, height: 1, depth: 1)
    let maxThreadsPerThreadgroup = additionComputePipelineState.maxTotalThreadsPerThreadgroup // 1024
    let threadsPerThreadgroup = MTLSize(width: maxThreadsPerThreadgroup, height: 1, depth: 1)
    commandEncoder?.dispatchThreads(threadsPerGrid,
                                    threadsPerThreadgroup: threadsPerThreadgroup)

    // Tell encoder that it is done encoding. Now we can send this off to the gpu.
    commandEncoder?.endEncoding()

    // Push this command to the command queue for processing
    commandBuffer?.commit()

    // Wait until the gpu function completes before working with any of the data
    commandBuffer?.waitUntilCompleted()

    // Get the pointer to the beginning of our data (capacity is in elements, not bytes)
    var resultBufferPointer = resultBuff?.contents().bindMemory(to: Int32.self,
                                                                capacity: size)

    // Print out all of our new added together array information
    for _ in 0..<size {
        print(resultBufferPointer!.pointee)
        resultBufferPointer = resultBufferPointer?.advanced(by: 1)
    }
}

// Call function
gpuProcess(arr1: array1)
Kernel:
#include <metal_stdlib>
using namespace metal;

kernel void shader(constant int *arr [[ buffer(0) ]],
                   device int *resultArray [[ buffer(1) ]],
                   constant uint &iter [[ buffer(2) ]],
                   uint gid [[ thread_position_in_grid ]]) // thread index in the grid; since the height and depth of the dispatch are 1 in the CPU code, a 1D index works here
{
    // Early out if gid is out of array bounds
    if (gid >= iter)
    {
        return;
    }

    // Each thread processes its own piece of data
    resultArray[gid] = arr[gid] * iter;
}
For more information on how to use Metal for compute, refer to the developer docs; for information about attributes such as thread_position_in_grid, refer to the Metal Shading Language specification.
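As a side note on the question's "strange results": with 32-bit values written by the kernel but 64-bit Int reads on the CPU, each printed number is two consecutive 32-bit results packed into one 64-bit word. A little CPU-side arithmetic (not part of the original answer) reproduces the first value:
// Results i = 0 and i = 1, read back together as a single little-endian 64-bit Int:
let lo: UInt64 = 0, hi: UInt64 = 1
print(lo | (hi << 32)) // 4294967296, the first "strange" value in the question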

CheckSum8 Modulo 256 Swift

I have an array of UInt8 and I want to calculate CheckSum8 Modulo 256.
If the sum of the bytes is less than 255, the checkSum function returns the correct value, e.g.:
let bytes1 : [UInt8] = [1, 0xa1]
let validCheck = checkSum(data : bytes1) // 162 = 0xa2
let bytes : [UInt8] = [6, 0xB1, 27,0xc5,0xf5,0x9d]
let invalidCheck = checkSum(data : bytes) // 41
The function below returns 41, but the expected checksum is 35.
func checkSum(data: [UInt8]) -> UInt8 {
    var sum = 0
    for i in 0..<data.count {
        sum += Int(data[i])
    }
    let retVal = sum & 0xff
    return UInt8(retVal)
}
Your checkSum method is largely right. If you want, you could simplify it to:
func checkSum(_ values: [UInt8]) -> UInt8 {
    let result = values.reduce(0) { ($0 + UInt32($1)) & 0xff }
    return UInt8(result)
}
You point out a web site that reports the checksum8 for 06B127c5f59d is 35.
The problem is that your array has 27, not 0x27. If you have hexadecimal values, you always need the 0x prefix for each value in your array literal (or, technically, at least if the value is larger than 9).
So, consider:
let values: [UInt8] = [0x06, 0xB1, 0x27, 0xc5, 0xf5, 0x9d]
let result = checkSum(values)
That’s 53. If you want to see that in hexadecimal (like that site you referred to):
let hex = String(result, radix: 16)
That shows us that the checksum is 0x35 in hexadecimal.
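As a final sanity check, running the simplified checkSum over both inputs from the question (with the corrected hex literals):
let bytes1: [UInt8] = [1, 0xa1]
print(checkSum(bytes1))                   // 162 = 0xa2
let bytes: [UInt8] = [0x06, 0xB1, 0x27, 0xc5, 0xf5, 0x9d]
print(String(checkSum(bytes), radix: 16)) // "35"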

Two Ways To Get 4 Bytes of (Swift3) Data Into a UInt32

So, I have a stream of well-formed data coming from some hardware. The stream consists of a bunch of chunks of 8-bit data, some of which are meant to form into 32-bit integers. That's all good. The data moves along and now I want to parcel the sequence up.
The data is actually a block of contiguous bytes, with segments of it mapped to useful data. So, for example, the first byte is a confirmation code, the following four bytes represent a UInt32 of some application-specific meaning, followed by two bytes representing a UInt16, and so on for a couple dozen bytes.
I found two different ways to do that, both of which seem a bit... overwrought. It may just be what happens when you get close to the metal.
But — are these two code idioms generally what one should expect to do? Or am I missing something more compact?
// data : Data exists before this code, and has what we're transforming into UInt32
// One Way to get 4 bytes from Data into a UInt32
var y : [UInt8] = [UInt8](repeating: UInt8(0x0), count: 4)
data.copyBytes(to: &y, from: Range(uncheckedBounds: (2,6)))
let u32result = UnsafePointer(y).withMemoryRebound(to: UInt32.self, capacity: 1, {
    $0.pointee
})
// u32result contains the 4 bytes from data
// Another Way to get 4 bytes from Data into a UInt32 via NSData
var result : UInt32 = 0
let resultAsNSData : NSData = data.subdata(in: Range(uncheckedBounds: (2,6))) as NSData
resultAsNSData.getBytes(&result, range: NSRange(location: 0, length: 4))
// result contains the 4 bytes from data
Creating a UInt32 array from a well-formed data object:
Swift 3
// Create sample data
let data = "foo".data(using: .utf8)!
// Using pointers style constructor
let array = data.withUnsafeBytes {
    // Only complete UInt32s are read, so divide the byte count by the element size
    [UInt32](UnsafeBufferPointer(start: $0, count: data.count / MemoryLayout<UInt32>.size))
}
Swift 2
// Create sample data
let data = "foo".dataUsingEncoding(NSUTF8StringEncoding)!
// Using pointers style constructor
let array = Array(UnsafeBufferPointer(start: UnsafePointer<UInt32>(data.bytes), count: data.length / sizeof(UInt32)))
I found two other ways of doing this, which leads me to believe that there are plenty of ways to do it, which is good, I suppose.
Two additional ways are described in some fashion over on Ray Wenderlich
This code dropped into your Xcode playground will reveal these two other idioms.
do {
    let count = 1 // number of UInt32s
    let stride = MemoryLayout<UInt32>.stride
    let alignment = MemoryLayout<UInt32>.alignment
    let byteCount = count * stride

    var bytes : [UInt8] = [0x0D, 0x0C, 0x0B, 0x0A] // little-endian LSB -> MSB
    var data : Data = Data.init(bytes: bytes) // In my situation, I actually start with an instance of Data, so the [UInt8] above is a conceit.

    print("---------------- 1 ------------------")
    let placeholder = UnsafeMutableRawPointer.allocate(bytes: byteCount, alignedTo: alignment)
    withUnsafeBytes(of: &data, { (bytes) in
        for (index, byte) in data.enumerated() {
            print("byte[\(index)]->\(String(format: "0x%02x", byte)) data[\(index)]->\(String(format: "0x%02x", data[index])) addr: \(bytes.baseAddress! + index)")
            placeholder.storeBytes(of: byte, toByteOffset: index, as: UInt8.self)
        }
    })
    let typedPointer1 = placeholder.bindMemory(to: UInt32.self, capacity: count)
    print("u32: \(String(format: "0x%08x", typedPointer1.pointee))")

    print("---------------- 2 ------------------")
    for (index, byte) in bytes.enumerated() {
        placeholder.storeBytes(of: byte, toByteOffset: index, as: UInt8.self)
        // print("byte \(index): \(byte)")
        print("byte[\(index)]->\(String(format: "0x%02x", byte))")
    }
    let typedPointer = placeholder.bindMemory(to: UInt32.self, capacity: count)
    print(typedPointer.pointee)
    let result : UInt32 = typedPointer.pointee
    print("u32: \(String(format: "0x%08x", typedPointer.pointee))")
}
With output:
---------------- 1 ------------------
byte[0]->0x0d data[0]->0x0d addr: 0x00007fff57243f68
byte[1]->0x0c data[1]->0x0c addr: 0x00007fff57243f69
byte[2]->0x0b data[2]->0x0b addr: 0x00007fff57243f6a
byte[3]->0x0a data[3]->0x0a addr: 0x00007fff57243f6b
u32: 0x0a0b0c0d
---------------- 2 ------------------
byte[0]->0x0d
byte[1]->0x0c
byte[2]->0x0b
byte[3]->0x0a
168496141
u32: 0x0a0b0c0d
Here's a Gist.
let a = [ 0x00, 0x00, 0x00, 0x0e ]
let b = a[0] << 24 + a[1] << 16 + a[2] << 8 + a[3]
print(b) // will print 14.
Should I describe this operation?
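For completeness: the shift-and-OR idiom from the first answer above works here too, with no pointer, alignment, or lifetime concerns. A sketch for a little-endian UInt32 at bytes 2..<6, matching the range in the question (the data value here is hypothetical):
import Foundation

let data = Data([0xAA, 0xBB, 0x0D, 0x0C, 0x0B, 0x0A, 0xCC])
// Walk the 4 bytes from most to least significant (little-endian: last byte first).
let u32result = data[2 ..< 6].reversed().reduce(0 as UInt32) { accum, byte in
    (accum << 8) | UInt32(byte)
}
print(String(format: "0x%08x", u32result)) // 0x0a0b0c0d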

vDSP_conv occasionally returns NANs

I'm using vDSP_conv to perform autocorrelation. Mostly it works just fine but every so often it's filling the output array with NaNs.
The code:
func corr_test() {
    var pass = 0
    var x = [Float]()
    for i in 0..<2000 {
        x.append(Float(i))
    }
    while true {
        print("pass \(pass)")
        let corr = autocorr(x)
        if corr[1].isNaN {
            print("!!!")
        }
        pass += 1
    }
}

func autocorr(a: [Float]) -> [Float] {
    let resultLen = a.count * 2 + 1
    let padding = [Float].init(count: a.count, repeatedValue: 0.0)
    let a_pad = padding + a + padding
    var result = [Float].init(count: resultLen, repeatedValue: 0.0)
    vDSP_conv(a_pad, 1, a_pad, 1, &result, 1, UInt(resultLen), UInt(a_pad.count))
    return result
}
The output:
pass ...
pass 169
pass 170
pass 171
(lldb) p corr
([Float]) $R0 = 4001 values {
[0] = 2.66466637E+9
[1] = NaN
[2] = NaN
[3] = NaN
[4] = NaN
...
I'm not sure what's going on here. I think I'm handling the zero padding correctly, since if I weren't, I don't think I'd be getting correct results 99% of the time.
Ideas? Gracias.
Figured it out. The key was this comment from https://developer.apple.com/library/mac/samplecode/vDSPExamples/Listings/DemonstrateConvolution_c.html :
// “The signal length is padded a bit. This length is not actually passed to the vDSP_conv routine; it is the number of elements
// that the signal array must contain. The SignalLength defined below is used to allocate space, and it is the filter length
// rounded up to a multiple of four elements and added to the result length. The extra elements give the vDSP_conv routine
// leeway to perform vector-load instructions, which load multiple elements even if they are not all used. If the caller did not
// guarantee that memory beyond the values used in the signal array were accessible, a memory access violation might result.”
“Padded a bit.” Thanks for being so specific. Anyway here's the final working product:
func autocorr(a: [Float]) -> [Float] {
    let filterLen = a.count
    let resultLen = filterLen * 2 - 1
    let signalLen = ((filterLen + 3) & 0xFFFFFFFC) + resultLen

    let padding1 = [Float].init(count: a.count - 1, repeatedValue: 0.0)
    let padding2 = [Float].init(count: (signalLen - padding1.count - a.count), repeatedValue: 0.0)
    let signal = padding1 + a + padding2

    var result = [Float].init(count: resultLen, repeatedValue: 0.0)
    vDSP_conv(signal, 1, a, 1, &result, 1, UInt(resultLen), UInt(filterLen))

    // Remove the first n-1 values which are just mirrored from the end so that [0] always has the autocorrelation.
    result.removeFirst(filterLen - 1)
    return result
}
Note that the results here aren't normalized.
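To make the padding arithmetic concrete, here is the same computation worked through for a hypothetical 5-element input:
// filterLen = 5
// resultLen = 5 * 2 - 1 = 9
// (5 + 3) & 0xFFFFFFFC = 8    (filter length rounded up to a multiple of 4)
// signalLen = 8 + 9 = 17      (elements the signal array must contain)
// padding1 = 4 zeros; padding2 = 17 - 4 - 5 = 8 zeros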

Swift - Turn Int to binary representations

I receive an Int from my server which I'd like to explode into an array of bit masks. So, for example, if my server gives me the number 3, I get two values: a binary 1 and a binary 2.
How do I do this in Swift?
You could use:
let number = 3
//radix: 2 is binary, if you wanted hex you could do radix: 16
let str = String(number, radix: 2)
println(str)
prints "11"
let number = 79
//radix: 2 is binary, if you wanted hex you could do radix: 16
let str = String(number, radix: 16)
println(str)
prints "4f"
I am not aware of any nice built-in way, but you could use this:
var i = 3
let a = 0..<8
var b = a.map { Int(i & (1 << $0)) }
// b = [1, 2, 0, 0, 0, 0, 0, 0]
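If you only want the set masks (as in the question, 3 → [1, 2]), you can filter out the zero entries; a small variation on the same idea, using compactMap on newer Swift:
let i = 3
let masks = (0..<8).compactMap { bit -> Int? in
    let mask = i & (1 << bit)
    return mask != 0 ? mask : nil
}
// masks == [1, 2]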
Here is a straightforward implementation:
func intToMasks(var n: Int) -> [Int] {
    var masks = [Int]()
    var mask = 1
    while n > 0 {
        if n & mask > 0 {
            masks.append(mask)
            n -= mask
        }
        mask <<= 1
    }
    return masks
}
println(intToMasks(3)) // prints "[1,2]"
println(intToMasks(1000)) // prints "[8,32,64,128,256,512]"
public extension UnsignedInteger {
    /// The digits that make up this number.
    /// - Parameter radix: The base the result will use.
    func digits(radix: Self = 10) -> [Self] {
        sequence(state: self) { quotient in
            guard quotient > 0
            else { return nil }

            let division = quotient.quotientAndRemainder(dividingBy: radix)
            quotient = division.quotient
            return division.remainder
        }
        .reversed()
    }
}
let digits = (6 as UInt).digits(radix: 0b10) // [1, 1, 0]
digits.reversed().enumerated().map { $1 << $0 } // [0, 2, 4]
Reverse the result too, if you need it.
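And, tying it back to the original question, the non-zero masks can be recovered from those digits; a hypothetical one-liner built on the extension above:
let n: UInt = 3
let masks = n.digits(radix: 2).reversed().enumerated()
    .compactMap { $1 == 1 ? UInt(1) << $0 : nil }
// masks == [1, 2]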