CheckSum8 Modulo 256 Swift

I have an array of UInt8 and I want to calculate CheckSum8 Modulo 256.
If the sum of the bytes is less than 255, the checkSum function returns the correct value, e.g.
let bytes1: [UInt8] = [1, 0xa1]
let validCheck = checkSum(data: bytes1) // 162 = 0xa2
let bytes: [UInt8] = [6, 0xB1, 27, 0xc5, 0xf5, 0x9d]
let invalidCheck = checkSum(data: bytes) // 41
The function below returns 41, but the expected checksum is 35.
func checkSum(data: [UInt8]) -> UInt8 {
    var sum = 0
    for i in 0..<data.count {
        sum += Int(data[i])
    }
    let retVal = sum & 0xff
    return UInt8(retVal)
}

Your checkSum method is largely right. If you want, you could simplify it to:
func checkSum(_ values: [UInt8]) -> UInt8 {
    let result = values.reduce(0) { ($0 + UInt32($1)) & 0xff }
    return UInt8(result)
}
You point out a website that reports the checksum8 of 06B127c5f59d as 35.
The problem is that your array has 27, not 0x27. If you have hexadecimal values, you always need the 0x prefix for each value in your array literal (or, technically, at least for values larger than 9).
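To see the difference:
let decimal: UInt8 = 27    // twenty-seven
let hex: UInt8 = 0x27      // thirty-nine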
So, consider:
let values: [UInt8] = [0x06, 0xB1, 0x27, 0xc5, 0xf5, 0x9d]
let result = checkSum(values)
That’s 53. If you want to see that in hexadecimal (like that site you referred to):
let hex = String(result, radix: 16)
That shows us that the checksum is 0x35 in hexadecimal.
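As an aside, the same modulo-256 sum can also be written with Swift's wrapping addition operator &+, which never needs a wider intermediate type (a minimal sketch using the values array above):
let wrappedChecksum = values.reduce(0 as UInt8) { $0 &+ $1 } // &+ wraps around at 256
String(wrappedChecksum, radix: 16)                           // "35"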

Related

How to convert from 8-byte hex to real

I'm currently working on a file converter. I've never done anything using binary file reading before. There are many converters available for this file type (gdsII to text), but none in Swift that I can find.
I've gotten all the other data types working (2byte int, 4byte int), but I'm really struggling with the real data type.
From a spec document:
http://www.cnf.cornell.edu/cnf_spie9.html
Real numbers are not represented in IEEE format. A floating point number is made up of three parts: the sign, the exponent, and the mantissa. The value of the number is defined to be (mantissa) × 16^(exponent). If "S" is the sign bit, "E" is exponent bits, and "M" are mantissa bits then an 8-byte real number has the format
SEEEEEEE MMMMMMMM MMMMMMMM MMMMMMMM
MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM
The exponent is in "excess 64" notation; that is, the 7-bit field shows a number that is 64 greater than the actual exponent. The mantissa is always a positive fraction greater than or equal to 1/16 and less than 1. For an 8-byte real, the mantissa is in bits 8 to 63. The decimal point of the binary mantissa is just to the left of bit 8. Bit 8 represents the value 1/2, bit 9 represents 1/4, and so on.
I've tried implementing something similar to what I've seen in Python or Perl, but each language has features that Swift doesn't have, and the type conversions get very confusing.
This is one method I tried, based on Perl. It doesn't seem to get the right value. Bitwise math is new to me.
var sgn = 1.0
let andSgn = 0x8000000000000000 & bytes8_test
if andSgn > 0 { sgn = -1.0 }
// var sgn = -1 if 0x8000000000000000 & num else 1
let manta = bytes8_test & 0x00ffffffffffffff
let exp = (bytes8_test >> 56) & 0x7f
let powBase = sgn * Double(manta)
let expPow = (4.0 * (Double(exp) - 64.0) - 56.0)
var testReal = pow(powBase, expPow)
Another approach I tried:
let bitArrayDecode = decodeBitArray(bitArray: bitArray)
let valueArray = calcValueOfArray(bitArray: bitArrayDecode)
var exponent: Int16
// calculate exponent
if negative {
    exponent = valueArray - 192
} else {
    exponent = valueArray - 64
}
// calculate mantissa
var mantissa = 0.0
// sgn = -1 if 0x8000000000000000 & num else 1
// mant = num & 0x00ffffffffffffff
// exp = (num >> 56) & 0x7f
// return math.ldexp(sgn * mant, 4 * (exp - 64) - 56)
for index in 0...7 {
    // let mantaByte = bytes8_1st[index]
    // mantissa += Double(mantaByte) / pow(256.0, Double(index))
    let bit = pow(2.0, Double(7 - index))
    let scaleBit = pow(2.0, Double(index))
    var mantab = (8.0 * Double(bytes8_1st[1] & UInt8(bit))) / (bit * scaleBit)
    mantissa = mantissa + mantab
    mantab = (8.0 * Double(bytes8_1st[2] & UInt8(bit))) / (256.0 * bit * scaleBit)
    mantissa = mantissa + mantab
    mantab = (8.0 * Double(bytes8_1st[3] & UInt8(bit))) / (256.0 * bit * scaleBit)
    mantissa = mantissa + mantab
}
let real = mantissa * pow(16.0, Double(exponent))
UPDATE:
The following part seems to work for the exponent. It returns -9 for the data set I'm working with, which is what I expect.
var exp = Int16((bytes8 >> 56) & 0x7f)
exp = exp - 65 // change from excess 64
print(exp)

var sgnVal = 0x8000000000000000 & bytes8
var sgn = 1.0
if sgnVal == 1 {
    sgn = -1.0
}
For the mantissa, though, I can't get the calculation correct somehow.
The data set:
3d 68 db 8b ac 71 0c b4
38 6d f3 7f 67 5e f6 ec
I think it should return 1e-9 for the exponent, and 0.0001.
The closest I've gotten is real Double 0.0000000000034907316148746757
var bytes7 = Array<UInt8>()
for (index, by) in data.enumerated() {
    if index < 4 {
        bytes7.append(by[0])
        bytes7.append(by[1])
    }
}
for index in 0...7 {
    mantissa += Double(bytes7[index]) / pow(256.0, Double(index) + 1.0)
}
var real = mantissa * pow(16.0, Double(exp))
print(mantissa)
END OF UPDATE.
It also doesn't seem to produce the correct values; this one was based on a C file.
If anyone can help me out with an English explanation of what the spec means, or any pointers on what to do, I would really appreciate it.
Thanks!
According to the doc, this code returns the 8-byte Real data as Double.
extension Data {
    func readUInt64BE(_ offset: Int) -> UInt64 {
        var value: UInt64 = 0
        _ = Swift.withUnsafeMutableBytes(of: &value) { bytes in
            copyBytes(to: bytes, from: offset..<offset+8)
        }
        return value.bigEndian
    }

    func readReal64(_ offset: Int) -> Double {
        let bitPattern = readUInt64BE(offset)
        let sign: FloatingPointSign = (bitPattern & 0x80000000_00000000) != 0 ? .minus : .plus
        let exponent = (Int((bitPattern >> 56) & 0x00000000_0000007F) - 64) * 4 - 56
        let significand = Double(bitPattern & 0x00FFFFFF_FFFFFFFF)
        let result = Double(sign: sign, exponent: exponent, significand: significand)
        return result
    }
}
Usage:
// Two 8-byte Real data taken from the example in the doc
let data = Data([
    // 1.0000000000000E-03
    0x3e, 0x41, 0x89, 0x37, 0x4b, 0xc6, 0xa7, 0xef,
    // 1.0000000000000E-09
    0x39, 0x44, 0xb8, 0x2f, 0xa0, 0x9b, 0x5a, 0x54,
])
let real1 = data.readReal64(0)
let real2 = data.readReal64(8)
print(real1, real2) //-> 0.001 1e-09
Another example from "UPDATE":
//0.0001 in "UPDATE"
let data = Data([0x3d, 0x68, 0xdb, 0x8b, 0xac, 0x71, 0x0c, 0xb4, 0x38, 0x6d, 0xf3, 0x7f, 0x67, 0x5e, 0xf6, 0xec])
let real = data.readReal64(0)
print(real) //->0.0001
Please remember that Double has only a 52-bit significand (mantissa), so this code loses some significant bits of the original 8-byte Real. I'm not sure whether that will be an issue for you or not.
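For reference, here is the same decoding spelled out step by step for the first sample above (a small sketch of the arithmetic that readReal64 performs):
import Foundation

// 1.0000000000000E-03 sample from the doc: 3e 41 89 37 4b c6 a7 ef
let word: UInt64 = 0x3E41_8937_4BC6_A7EF
let sign = (word & 0x8000_0000_0000_0000) == 0 ? 1.0 : -1.0   // sign bit is 0 -> positive
let exponent = Int((word >> 56) & 0x7F) - 64                  // 0x3E = 62, 62 - 64 = -2
let fraction = Double(word & 0x00FF_FFFF_FFFF_FFFF) / 0x1p56  // 56 mantissa bits as a fraction, ~0.256
let value = sign * fraction * pow(16.0, Double(exponent))     // ~0.256 / 256 = 0.001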

Swift: Byte array to decimal value

In my project I communicate with a Bluetooth device. The device sends me a timestamp in seconds, which I receive as bytes:
[2, 6, 239]
When I convert it to a string:
let payloadString = payload.map {
    String(format: "%02x", $0)
}
Output:
["02", "06", "ef"]
When I convert it on a website, 0206ef = 132847 seconds.
How can I directly convert my array [2, 6, 239] into seconds (= 132847 seconds)?
And if that's complicated, how can I translate my array ["02", "06", "ef"] into seconds (= 132847 seconds)?
The payload contains the bytes of the binary representation of the value.
You convert it back to the value by shifting each byte into its corresponding position:
let payload: [UInt8] = [2, 6, 239]
let value = Int(payload[0]) << 16 + Int(payload[1]) << 8 + Int(payload[2])
print(value) // 132847
The important point is to convert the bytes to integers before shifting, otherwise an overflow error would occur. Alternatively,
with multiplication:
let value = (Int(payload[0]) * 256 + Int(payload[1])) * 256 + Int(payload[2])
or
let value = payload.reduce(0) { $0 * 256 + Int($1) }
The last approach works with an arbitrary number of bytes – as long as the result fits into an Int. For 4...8 bytes you are better off choosing UInt64 to avoid overflow errors:
let value = payload.reduce(0) { $0 * 256 + UInt64($1) }
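If you need this in more than one place, the reduce idea can be wrapped in a small helper (a sketch, assuming big-endian byte order and a result that fits in UInt64, i.e. at most 8 bytes):
extension Collection where Element == UInt8 {
    /// Interprets the bytes as a single big-endian unsigned integer.
    var bigEndianValue: UInt64 {
        return reduce(0 as UInt64) { $0 << 8 | UInt64($1) }
    }
}

let payload: [UInt8] = [2, 6, 239]
payload.bigEndianValue // 132847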
The payloadString array can be joined into a hex string, which can then be converted to a decimal value:
var payload = [2, 6, 239]
let payloadString = payload.map {
    String(format: "%02x", $0)
}
//let hexStr = payloadString.reduce("") { $0 + $1 }
let hexStr = payloadString.joined()
if let value = UInt64(hexStr, radix: 16) {
    print(value) // 132847
}

Reading Data into a Struct in Swift

I'm trying to read bare Data into a Swift 4 struct using the withUnsafeBytes method. Here's the problem.
The network UDP packet has this format:
data: 0102 0A00 0000 0B00 0000
01 : 1 byte : majorVersion (decimal 01)
02 : 1 byte : minorVersion (decimal 02)
0A00 0000 : 4 bytes: applicationHostId (decimal 10)
0B00 0000 : 4 bytes: versionNumber (decimal 11)
Then I have an extension on Data that takes a start position and the number of bytes to read:
extension Data {
    func scanValue<T>(start: Int, length: Int) -> T {
        return self.subdata(in: start..<start+length).withUnsafeBytes { $0.pointee }
    }
}
This works correctly when reading the values one by one:
// correctly read as decimal "1"
let majorVersion: UInt8 = data.scanValue(start: 0, length: 1)
// correctly read as decimal "2"
let minorVersion: UInt8 = data.scanValue(start: 1, length: 1)
// correctly read as decimal "10"
let applicationHostId: UInt32 = data.scanValue(start: 2, length: 4)
// correctly read as decimal "11"
let versionNumber: UInt32 = data.scanValue(start: 6, length: 4)
Then I created a struct that represents the entire packet as follows
struct XPLBeacon {
    var majorVersion: UInt8        // 1 Byte
    var minorVersion: UInt8        // 1 Byte
    var applicationHostId: UInt32  // 4 Bytes
    var versionNumber: UInt32      // 4 Bytes
}
But when I read the data directly into the structure I have some issues:
var beacon: XPLBeacon = data.scanValue(start: 0, length: data.count)
// correctly read as decimal "1"
beacon.majorVersion
// correctly read as decimal "2"
beacon.minorVersion
// not correctly read
beacon.applicationHostId
// not correctly read
beacon.versionNumber
Is it supposed to work to parse an entire struct like this?
Reading the entire structure from the data does not work because
the struct members are padded to their natural boundary. The
memory layout of struct XPLBeacon is
A B x x C C C C D D D D
where
offset member
0 A - majorVersion (UInt8)
1 B - minorVersion (UInt8)
2 x x - padding
4 C C C C - applicationHostId (UInt32)
8 D D D D - versionNumber (UInt32)
and the padding is inserted so that the UInt32 members are
aligned to memory addresses which are a multiple of their size. This is
also confirmed by
print(MemoryLayout<XPLBeacon>.size) // 12
(For more information about alignment in Swift, see
Type Layout).
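If it helps, the other layout numbers can be inspected the same way (a small sketch; the values shown assume the usual layout where UInt32 is 4-byte aligned):
print(MemoryLayout<XPLBeacon>.size)      // 12
print(MemoryLayout<XPLBeacon>.stride)    // 12
print(MemoryLayout<XPLBeacon>.alignment) // 4 – dictated by the UInt32 members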
If you read the entire data into the struct then the bytes are assigned
as follows
01 02 0A 00 00 00 0B 00 00 00
A  B  x  x  C  C  C  C  D  D
which explains why major/minorVersion are correct, but applicationHostId and versionNumber
are wrong. Reading all members separately from the data is the correct solution.
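For example, the scanValue(start:length:) helper from the question can feed the struct's memberwise initializer directly (a minimal sketch; like the original code, it assumes the packet and the host are both little-endian):
let beacon = XPLBeacon(
    majorVersion: data.scanValue(start: 0, length: 1),
    minorVersion: data.scanValue(start: 1, length: 1),
    applicationHostId: data.scanValue(start: 2, length: 4),
    versionNumber: data.scanValue(start: 6, length: 4)
)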
Since Swift 3, Data conforms to RandomAccessCollection, MutableCollection, and RangeReplaceableCollection, so you can simply create a custom initializer to initialize your struct properties as follows:
struct XPLBeacon {
    let majorVersion, minorVersion: UInt8         // 1 + 1 = 2 Bytes
    let applicationHostId, versionNumber: UInt32  // 4 + 4 = 8 Bytes
    init(data: Data) {
        self.majorVersion = data[0]
        self.minorVersion = data[1]
        self.applicationHostId = data
            .subdata(in: 2..<6)
            .withUnsafeBytes { $0.load(as: UInt32.self) }
        self.versionNumber = data
            .subdata(in: 6..<10)
            .withUnsafeBytes { $0.load(as: UInt32.self) }
    }
}
var data = Data([0x01,0x02, 0x0A, 0x00, 0x00, 0x00, 0x0B, 0x00, 0x00,0x00])
print(data as NSData) // "{length = 10, bytes = 0x01020a0000000b000000}\n" <01020a00 00000b00 0000>
let beacon = XPLBeacon(data: data)
beacon.majorVersion // 1
beacon.minorVersion // 2
beacon.applicationHostId // 10
beacon.versionNumber // 11
Following on Leo Dabus's answer, I created a slightly more readable constructor:
extension Data {
    func object<T>(at index: Index) -> T {
        subdata(in: index ..< index.advanced(by: MemoryLayout<T>.size))
            .withUnsafeBytes { $0.load(as: T.self) }
    }
}

struct XPLBeacon {
    var majorVersion: UInt8
    var minorVersion: UInt8
    var applicationHostId: UInt32
    var versionNumber: UInt32

    init(data: Data) {
        var index = data.startIndex
        majorVersion = data.object(at: index)
        index += MemoryLayout.size(ofValue: majorVersion)
        minorVersion = data.object(at: index)
        index += MemoryLayout.size(ofValue: minorVersion)
        applicationHostId = data.object(at: index)
        index += MemoryLayout.size(ofValue: applicationHostId)
        versionNumber = data.object(at: index)
    }
}
What is not part of this, of course, is checking the correctness of the data. As others mentioned in the comments, this could be done either by making the init method failable or by throwing an Error.
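For illustration, a failable variant along those lines might look like this (a sketch with a hypothetical init?(validating:); 10 bytes is the packet size from the layout above):
extension XPLBeacon {
    // Hypothetical convenience initializer: refuses packets that are too short.
    init?(validating data: Data) {
        guard data.count >= 10 else { return nil } // 1 + 1 + 4 + 4 bytes
        self.init(data: data)
    }
}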

Swift 3: Negative Int to hexadecimal

Hi everyone,
I need to transform an Int to its hexadecimal value.
Example: -40 => D8
I have a working method for positive (or unsigned) Int, but it doesn't work as expected with negatives. Here's my code.
class func encodeHex(data: [Int]) -> String {
    let hexadecimal = data.reduce("") { (string, element) in
        var append = String(element, radix: 16, uppercase: false)
        if append.characters.count == 1 {
            append = "0" + append
        }
        return string + append
    }
    return hexadecimal
}
If I pass -40 I get -28.
Can anyone help? Thanks :)
I assume from your existing code that all integers are in the range
-128 ... 127. Then this would work:
func encodeHex(data: [Int]) -> String {
    return data.map { String(format: "%02hhX", $0) }.joined()
}
The "%02hhX" format prints the least significant byte of the
given integer in base 16 with 2 digits.
Example:
print(encodeHex(data: [40, -40, 127, -128]))
// 28D87F80
D8 is the last byte of the binary representation of -40; the remaining, more significant bytes are all FFs.
If you are looking for a string that represents only the last byte, you can obtain it by first converting your number to an unsigned 8-bit integer and then converting that to hex, like this:
let x = UInt8(bitPattern: Int8(-40)) // 216, i.e. 0xD8
let res = String(format: "%02X", x)  // "D8"
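Putting the two answers together, a complete replacement for encodeHex that keeps only the least significant byte of each element could look like this (a sketch; the function name is made up, and UInt8(truncatingIfNeeded:) is the Swift 4 spelling of truncatingBitPattern:):
func encodeHexLastByte(data: [Int]) -> String {
    return data.map { value -> String in
        let byte = UInt8(truncatingIfNeeded: value) // keep only the low 8 bits
        return String(format: "%02X", byte)
    }.joined()
}

encodeHexLastByte(data: [40, -40, 127, -128]) // "28D87F80"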

How in Swift to convert Int16 to two UInt8 bytes

I have some binary data that encodes a two-byte value as a signed integer.
bytes[1] = 255 // 0xFF
bytes[2] = 251 // 0xF1
Decoding
This is fairly easy - I can extract an Int16 value from these bytes with:
Int16(bytes[1]) << 8 | Int16(bytes[2])
Encoding
This is where I'm running into issues. Most of my data spec calls for UInt, and that is easy, but I'm having trouble extracting the two bytes that make up an Int16.
let nv : Int16 = -15
UInt8(nv >> 8) // fail
UInt8(nv) // fail
Question
How would I extract the two bytes that make up an Int16 value?
You should work with unsigned integers:
let bytes: [UInt8] = [255, 251]
let uInt16Value = UInt16(bytes[0]) << 8 | UInt16(bytes[1])
let uInt8Value0 = UInt8(uInt16Value >> 8)
let uInt8Value1 = UInt8(uInt16Value & 0x00ff)
If you want to convert UInt16 to bit equivalent Int16 then you can do it with specific initializer:
let int16Value: Int16 = -15
let uInt16Value = UInt16(bitPattern: int16Value)
And vice versa:
let uInt16Value: UInt16 = 65000
let int16Value = Int16(bitPattern: uInt16Value)
In your case:
let nv: Int16 = -15
let uNv = UInt16(bitPattern: nv)
UInt8(uNv >> 8)     // 255
UInt8(uNv & 0x00ff) // 241
You could use the init(truncatingBitPattern:) initializer (renamed init(truncatingIfNeeded:) in Swift 4):
let nv: Int16 = -15
UInt8(truncatingBitPattern: nv >> 8) // -> 255
UInt8(truncatingBitPattern: nv) // -> 241
I would just do this:
let a = UInt8(nv >> 8 & 0x00ff) // 255
let b = UInt8(nv & 0x00ff) // 241
extension Int16 {
    var twoBytes: [UInt8] {
        let unsignedSelf = UInt16(bitPattern: self)
        return [UInt8(truncatingIfNeeded: unsignedSelf >> 8),
                UInt8(truncatingIfNeeded: unsignedSelf)]
    }
}
var test : Int16 = -15
test.twoBytes // [255, 241]
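A quick round-trip check (a small sketch): recombining the two bytes with the decoding expression from the question gives back the original value.
let parts = test.twoBytes // [255, 241]
let restored = Int16(bitPattern: UInt16(parts[0]) << 8 | UInt16(parts[1]))
// restored == -15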