Swift sign extension with variable number of bits

I need to sign-extend an 8-bit value to 12 bits. In C, I can do it as shown below. I read Apple's BinaryInteger protocol documentation, but it didn't explain sign-extending to a variable number of bits (and I'm also pretty new to Swift). How can I do this in Swift, assuming val is a UInt8 and numbits is 12?
#define MASKBITS(numbits) ((1 << numbits) - 1)
#define SIGNEXTEND_TO_16(val, numbits) \
    ( \
        (int16_t)((val & MASKBITS(numbits)) | ( \
            (val & (1 << (numbits-1))) ? ~MASKBITS(numbits) : 0) \
    ))

You can use Int8(bitPattern:) to convert the given unsigned value to a signed value with the same binary representation, sign-extend by converting to Int16, make it unsigned again, and finally truncate to the given number of bits:
func signExtend(val: UInt8, numBits: Int) -> UInt16 {
    // Sign-extend to unsigned 16-bit:
    var extended = UInt16(bitPattern: Int16(Int8(bitPattern: val)))
    // Truncate to the given number of bits:
    if numBits < 16 {
        extended = extended & ((1 << numBits) - 1)
    }
    return extended
}
Example:
for i in 1...16 {
    let x = signExtend(val: 200, numBits: i)
    print(String(format: "%2d %04X", i, x))
}
Output:
 1 0000
 2 0000
 3 0000
 4 0008
 5 0008
 6 0008
 7 0048
 8 00C8
 9 01C8
10 03C8
11 07C8
12 0FC8
13 1FC8
14 3FC8
15 7FC8
16 FFC8
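For completeness, here is a sketch of mine (not part of the answer) that transcribes the C macro's semantics directly: it treats the low numBits bits of the input as a two's complement field and extends it to 16 bits. The function name is hypothetical.
func signExtendFromBits(_ val: UInt16, numBits: Int) -> Int16 {
    precondition((1...16).contains(numBits))
    let mask = UInt16.max >> (16 - numBits)   // numBits one-bits
    let signBit: UInt16 = 1 << (numBits - 1)  // sign bit of the field
    let masked = val & mask
    // If the field's sign bit is set, fill the high bits with ones.
    let extended = masked & signBit != 0 ? masked | ~mask : masked
    return Int16(bitPattern: extended)
}
signExtendFromBits(0xC8, numBits: 12) // 200 (0xC8 is positive in 12 bits)
signExtendFromBits(0xC8, numBits: 8)  // -56 (0xC8 is negative in 8 bits)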

I had the same question in the context of bitstream parsing: I needed code to parse n-bit two's complement values into Int32. Here is my solution, which works without any conditionals: the left shift moves the field's sign bit into the sign position of the Int32, and the arithmetic right shift then replicates it back down.
extension UInt32 {
    func signExtension(n: Int) -> Int32 {
        let signed = Int32(bitPattern: self << (32 - n))
        let result = signed >> (32 - n)
        return result
    }
}
And a unit test function showing how to use that code:
func testSignExtension_N_2_3() {
    let unsignedValue: UInt32 = 0b110
    let signedValue: Int32 = unsignedValue.signExtension(n: 3)
    XCTAssertEqual(signedValue, -2)
}

Related

How to get the binary inverse of a number in Swift?

If we have a given number, say 9 (binary representation 1001), how can we most efficiently get its inverse, 6 (binary representation 0110)? I.e., replacing 0 with 1 and 1 with 0.
I have written code of O(1) complexity, but can there be a better way? Does Swift provide an elegant way of handling this?
Note: the bitwise NOT operator gives ~9 == -10. This is not what I am seeking.
func inverse(of givenNumber: Int) -> Int // e.g. 9
{
    let binaryRepresentation = String(givenNumber, radix: 2) // "1001"
    let binaryRepresentationLength = binaryRepresentation.count // 4
    let maxValueInLength = (1 << binaryRepresentationLength) - 1 // 15, i.e., 1111
    let answer = givenNumber ^ maxValueInLength // 6, i.e., 0110
    return answer
}
Edit 1: givenNumber > 0
For positive numbers you can use the following:
func intInverse<T: FixedWidthInteger>(of givenNumber: T) -> T
{
    assert(!T.isSigned || givenNumber & (T(1) << (givenNumber.bitWidth - 1)) == 0)
    let binaryRepresentationLength = givenNumber.bitWidth - givenNumber.leadingZeroBitCount
    let maxValueInLength = givenNumber.leadingZeroBitCount > 0 ? (~(~T(0) << binaryRepresentationLength)) : ~0
    let answer = givenNumber ^ maxValueInLength
    return answer
}
This is identical to your algorithm but doesn't require stringifying the number. It doesn't work for negative numbers, but then neither does your algorithm, because your algorithm sticks a - on the front of the number.
Probably the easiest way to extend this to cover negative numbers is to invert all the bits first to get the binaryRepresentationLength.
EDIT
I changed the way the exclusive-or mask is created because the old version crashed for unsigned values with the top bit set and for signed values with the second-highest bit set.
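A quick usage check of intInverse (my examples, not from the answer):
intInverse(of: 9)          // 6  (0b1001 -> 0b0110)
intInverse(of: UInt8(200)) // 55 (0b11001000 -> 0b00110111)
intInverse(of: UInt8(255)) // 0  (top bit set is fine for unsigned)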
The code becomes much simpler if you use the binade property of a floating-point value.
func inverse(of givenNumber: Int) -> Int // e.g. 9
{
    let maxValueInLength = Int((Double(givenNumber).binade * 2) - 1) // 15, i.e., 1111
    let answer = givenNumber ^ maxValueInLength // 6, i.e., 0110
    return answer
}
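A quick check of the binade version, with one caveat of mine: the detour through Double is exact only while givenNumber fits in Double's 53-bit significand, so very large Int values can produce a wrong mask.
inverse(of: 9)  // 6 (0b1001 -> 0b0110)
inverse(of: 12) // 3 (0b1100 -> 0b0011)
inverse(of: 1)  // 0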

How to handle answer of long long integers in swift [duplicate]

This question already has answers here: Modulus power of big numbers.
I have to calculate the power of two long integers in Swift. Swift gives an error of NaN (not a number) and fails to answer, e.g.
pow(2907, 1177)
The main task is to calculate the power and get the remainder (a^b % n), where a = 2907, b = 1177, n = 1211.
Any guidelines on how to solve it?
You will have to either 1. use an external framework or 2. do it yourself.
1. External Framework:
I think you can try: https://github.com/mkrd/Swift-Big-Integer
let a = BInt(2907)
let b = 1177
let n = BInt(1211)
let result = (a ** b) % n
print(result) // prints 331
Note: the CocoaPods import failed, so I just imported this file to make it work: https://github.com/mkrd/Swift-Big-Integer/tree/master/Sources
2. DIY:
Using the answer of Modulus power of big numbers
// Square-and-multiply: walk the exponent bit by bit,
// reducing modulo `modulus` at every step.
func powerMod(base: Int, exponent: Int, modulus: Int) -> Int {
    guard base > 0 && exponent >= 0 && modulus > 0
        else { return -1 }
    var base = base
    var exponent = exponent
    var result = 1
    while exponent > 0 {
        if exponent % 2 == 1 {
            // Odd exponent: fold the current base into the result.
            result = (result * base) % modulus
        }
        base = (base * base) % modulus
        exponent = exponent / 2
    }
    return result
}
let result = powerMod(base: 2907, exponent: 1177, modulus: 1211)
print(result) // prints 331
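One caveat with this DIY version (my note, not from the original answer): result * base and base * base are ordinary trapping multiplications, so the function crashes once modulus is large enough that (modulus - 1)^2 overflows Int. A defensive sketch using overflow-checked multiplication; the name powerModChecked is mine:
func powerModChecked(base: Int, exponent: Int, modulus: Int) -> Int? {
    guard base > 0, exponent >= 0, modulus > 0 else { return nil }
    var (base, exponent, result) = (base % modulus, exponent, 1)
    while exponent > 0 {
        if exponent % 2 == 1 {
            let (r, overflow) = result.multipliedReportingOverflow(by: base)
            guard !overflow else { return nil } // would have trapped
            result = r % modulus
        }
        let (b, overflow) = base.multipliedReportingOverflow(by: base)
        guard !overflow else { return nil }
        base = b % modulus
        exponent /= 2
    }
    return result
}
powerModChecked(base: 2907, exponent: 1177, modulus: 1211) // Optional(331)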
3. Bonus: the same as 2., but with a custom ternary operator, thanks to http://natecook.com/blog/2014/10/ternary-operators-in-swift/
precedencegroup ModularityLeft {
    higherThan: ComparisonPrecedence
    lowerThan: AdditionPrecedence
}
precedencegroup ModularityRight {
    higherThan: ModularityLeft
    lowerThan: AdditionPrecedence
}
infix operator *%* : ModularityLeft
infix operator %*% : ModularityRight
func %*%(exponent: Int, modulus: Int) -> (Int) -> Int {
    return { base in
        guard base > 0 && exponent >= 0 && modulus > 0
            else { return -1 }
        var base = base
        var exponent = exponent
        var result = 1
        while exponent > 0 {
            if exponent % 2 == 1 {
                result = (result * base) % modulus
            }
            base = (base * base) % modulus
            exponent = exponent / 2
        }
        return result
    }
}
func *%*(lhs: Int, rhs: (Int) -> Int) -> Int {
    return rhs(lhs)
}
And then you can just call:
let result = 2907 *%* 1177 %*% 1211
Additional information:
Just for information, in binary 2907^1177 takes 13542 bits...
https://www.wolframalpha.com/input/?i=2907%5E1177+in+binary
It takes a 4 kB string to store it in base 10: https://www.wolframalpha.com/input/?i=2907%5E1177

Reading Data into a Struct in Swift

I'm trying to read bare Data into a Swift 4 struct using the withUnsafeBytes method. The problem: the network UDP packet has this format:
data: 0102 0A00 0000 0B00 0000
01 : 1 byte : majorVersion (decimal 01)
02 : 1 byte : minorVersion (decimal 02)
0A00 0000 : 4 bytes: applicationHostId (decimal 10)
0B00 0000 : 4 bytes: versionNumber (decimal 11)
Then I have an extension on Data that takes a start and the length of bytes to read:
extension Data {
    func scanValue<T>(start: Int, length: Int) -> T {
        return self.subdata(in: start..<start+length).withUnsafeBytes { $0.pointee }
    }
}
This works correctly when reading the values one by one:
// correctly read as decimal "1"
let majorVersion: UInt8 = data.scanValue(start: 0, length: 1)
// correctly read as decimal "2"
let minorVersion: UInt8 = data.scanValue(start: 1, length: 1)
// correctly read as decimal "10"
let applicationHostId: UInt32 = data.scanValue(start: 2, length: 4)
// correctly read as decimal "11"
let versionNumber: UInt32 = data.scanValue(start: 6, length: 4)
Then I created a struct that represents the entire packet, as follows:
struct XPLBeacon {
    var majorVersion: UInt8 // 1 Byte
    var minorVersion: UInt8 // 1 Byte
    var applicationHostId: UInt32 // 4 Bytes
    var versionNumber: UInt32 // 4 Bytes
}
But when I read the data directly into the structure I have some issues:
var beacon: XPLBeacon = data.scanValue(start: 0, length: data.count)
// correctly read as decimal "1"
beacon.majorVersion
// correctly read as decimal "2"
beacon.minorVersion
// not correctly read
beacon.applicationHostId
// not correctly read
beacon.versionNumber
Is it supposed to work to parse an entire struct like this?
Reading the entire structure from the data does not work because
the struct members are padded to their natural boundary. The
memory layout of struct XPLBeacon is
A B x x C C C C D D D D
where
offset  member
0       A        - majorVersion (UInt8)
1       B        - minorVersion (UInt8)
2       x x      - padding
4       C C C C  - applicationHostId (UInt32)
8       D D D D  - versionNumber (UInt32)
and the padding is inserted so that the UInt32 members are
aligned to memory addresses which are a multiple of their size. This is
also confirmed by
print(MemoryLayout<XPLBeacon>.size) // 12
(For more information about alignment in Swift, see
Type Layout).
If you read the entire data into the struct, then the bytes are assigned as follows:
01 02 0A 00 00 00 0B 00 00 00
A  B  x  x  C  C  C  C  D  D
which explains why majorVersion and minorVersion are correct, but applicationHostId and versionNumber are wrong (the 10 data bytes run out before the 12-byte struct is filled). Reading all members separately from the data is the correct solution.
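You can also query the offsets directly (a sketch of mine using MemoryLayout.offset(of:), available since Swift 4.2):
print(MemoryLayout<XPLBeacon>.offset(of: \.majorVersion)!)      // 0
print(MemoryLayout<XPLBeacon>.offset(of: \.minorVersion)!)      // 1
print(MemoryLayout<XPLBeacon>.offset(of: \.applicationHostId)!) // 4
print(MemoryLayout<XPLBeacon>.offset(of: \.versionNumber)!)     // 8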
Since Swift 3, Data conforms to RandomAccessCollection, MutableCollection, and RangeReplaceableCollection, so you can simply create a custom initializer to initialize your struct properties as follows:
struct XPLBeacon {
    let majorVersion, minorVersion: UInt8 // 1 + 1 = 2 Bytes
    let applicationHostId, versionNumber: UInt32 // 4 + 4 = 8 Bytes
    init(data: Data) {
        self.majorVersion = data[0]
        self.minorVersion = data[1]
        self.applicationHostId = data
            .subdata(in: 2..<6)
            .withUnsafeBytes { $0.load(as: UInt32.self) }
        self.versionNumber = data
            .subdata(in: 6..<10)
            .withUnsafeBytes { $0.load(as: UInt32.self) }
    }
}
var data = Data([0x01,0x02, 0x0A, 0x00, 0x00, 0x00, 0x0B, 0x00, 0x00,0x00])
print(data as NSData) // "{length = 10, bytes = 0x01020a0000000b000000}\n" <01020a00 00000b00 0000>
let beacon = XPLBeacon(data: data)
beacon.majorVersion // 1
beacon.minorVersion // 2
beacon.applicationHostId // 10
beacon.versionNumber // 11
Following up on Leo Dabus's answer, I created a slightly more readable constructor:
extension Data {
    func object<T>(at index: Index) -> T {
        subdata(in: index ..< index.advanced(by: MemoryLayout<T>.size))
            .withUnsafeBytes { $0.load(as: T.self) }
    }
}
struct XPLBeacon {
    var majorVersion: UInt8
    var minorVersion: UInt8
    var applicationHostId: UInt32
    var versionNumber: UInt32
    init(data: Data) {
        var index = data.startIndex
        majorVersion = data.object(at: index)
        index += MemoryLayout.size(ofValue: majorVersion)
        minorVersion = data.object(at: index)
        index += MemoryLayout.size(ofValue: minorVersion)
        applicationHostId = data.object(at: index)
        index += MemoryLayout.size(ofValue: applicationHostId)
        versionNumber = data.object(at: index)
    }
}
What is not part of this is, of course, checking the correctness of the data. As others mentioned in the comments, this could be done either by having a failable init method or by throwing an Error.
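A closing note of mine on the unsafe-bytes calls used above: in Swift 5, Data.withUnsafeBytes passes an UnsafeRawBufferPointer, so the question's $0.pointee no longer compiles and $0.load(as:) is the replacement. load(as:) also requires suitably aligned bytes (the fresh copy from subdata(in:) is fine), and since Swift 5.7, loadUnaligned(as:) avoids both the alignment requirement and the copy:
let hostId = data.withUnsafeBytes {
    // Read 4 bytes at offset 2 with no alignment requirement (Swift 5.7+).
    $0.loadUnaligned(fromByteOffset: 2, as: UInt32.self)
}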

Unsigned right shift operator '>>>' in Swift

How would you implement the equivalent to Java's unsigned right shift operator in Swift?
According to Java's documentation, the unsigned right shift operator ">>>" shifts a zero into the leftmost position, while the leftmost position after ">>" depends on sign extension.
So, for instance,
long s1 = (-7L >>> 16); // result is 281474976710655L
long s2 = (-7L >> 16); // result is -1
In order to implement this in Swift, I would take all the bits except the sign bit by doing something like,
let lsb = Int64.max + negativeNumber + 1
Notice that the number has to be negative! If you overflow the shift operator, the app crashes with EXC_BAD_INSTRUCTION, which is not very nice...
Also, I'm using Int64 on purpose, because there's no bigger datatype and something like (1 << 63) would overflow the Int64 and also crash. So instead of doing ((1 << 63) - 1 + negativeNumber) in a bigger datatype, I wrote it as Int64.max + negativeNumber + 1.
Then, shift that positive number with the normal logical shift, and OR the sign bit back in at the first bit position after the sign.
let shifted = (lsb >> bits) | 0x4000000000000000
However, that doesn't give me the expected result,
((Int64.max - 7 + 1) >> 16) | 0x4000000000000000 // = 4611826755915743231
Not sure what I'm doing wrong...
Also, would it be possible to name this operator '>>>' and extend Int64?
Edit:
Adding here the solution from OOper below:
infix operator >>> : BitwiseShiftPrecedence
func >>> (lhs: Int64, rhs: Int64) -> Int64 {
    return Int64(bitPattern: UInt64(bitPattern: lhs) >> UInt64(rhs))
}
I was implementing the Java Random class in Swift, which also involves truncating 64-bit ints to 32 bits. Thanks to OOper I just realized I can use the truncatingBitPattern initializer to avoid overflow exceptions. The function next as described here becomes this in Swift:
var seed: Int64 = 0
private func next(_ bits: Int32) -> Int32 {
    seed = (seed &* 0x5DEECE66D &+ 0xB) & ((1 << 48) - 1)
    let shifted: Int64 = seed >>> (48 - Int64(bits))
    return Int32(truncatingBitPattern: shifted)
}
One sure way to do it is to use the unsigned shift operation of an unsigned integer type:
infix operator >>> : BitwiseShiftPrecedence
func >>> (lhs: Int64, rhs: Int64) -> Int64 {
    return Int64(bitPattern: UInt64(bitPattern: lhs) >> UInt64(rhs))
}
print(-7 >>> 16) //-> 281474976710655
(Using -7 for testing with bit count 16 does not seem to be a good example; it loses all significant bits with a 16-bit right shift.)
If you want to do it your way, the bitwise-ORed missing sign bit cannot be the constant 0x4000000000000000. It needs to be 0x8000_0000_0000_0000 (this constant overflows in Swift's Int64) when the bit count is 0, and it needs to be logically shifted by the same number of bits.
So, you need to write something like this:
infix operator >>>> : BitwiseShiftPrecedence
func >>>> (lhs: Int64, rhs: Int64) -> Int64 {
    if lhs >= 0 {
        return lhs >> rhs
    } else {
        return (Int64.max + lhs + 1) >> rhs | (1 << (63 - rhs))
    }
}
print(-7 >>>> 16) //-> 281474976710655
It seems far easier to work with unsigned integer types when you need an unsigned shift operation.
Swift has unsigned integer types, so there is no need for a separate unsigned right shift operator. That's a choice in Java that followed from its decision not to have unsigned types.
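For reuse across widths, the bit-pattern approach also writes generically (a sketch of mine, not from the answers):
infix operator >>> : BitwiseShiftPrecedence
// Unsigned right shift for any fixed-width signed integer:
// reinterpret the bits as the unsigned counterpart, shift, convert back.
func >>> <T: FixedWidthInteger & SignedInteger>(lhs: T, rhs: Int) -> T {
    T(truncatingIfNeeded: T.Magnitude(truncatingIfNeeded: lhs) >> rhs)
}
print((-7 as Int64) >>> 16) // 281474976710655
print((-7 as Int32) >>> 16) // 65535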

Bitwise and arithmetic operations in swift

Honestly speaking, porting to Swift 3 (from Obj-C) is going hard. Here is the easiest but the swiftiest question.
public func readByte() -> UInt8
{
    // ...
}
public func readShortInteger() -> Int16
{
    return (self.readByte() << 8) + self.readByte();
}
I'm getting an error message from the compiler: "Binary operator '+' cannot be applied to two UInt8 operands."
What is wrong?
ps. What a shame ;)
readByte returns a UInt8, so:
- You cannot shift a UInt8 left by 8 bits; you'll lose all its bits.
- The type of the expression is UInt8, which cannot fit the Int16 value it is computing.
- The type of the expression is UInt8, which is not the annotated return type Int16.
Do this instead:
func readShortInteger() -> Int16
{
    let highByte = self.readByte()
    let lowByte = self.readByte()
    return Int16(highByte) << 8 | Int16(lowByte)
}
While Swift has a strict left-to-right evaluation order for operands, I refactored the code to make it explicit which byte is read first and which second. Also, an OR operator is more self-documenting and semantic here.
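A quick worked example of the fixed version (my numbers, assuming the stream yields 0x12 and then 0x34):
let highByte: UInt8 = 0x12 // first byte read
let lowByte: UInt8 = 0x34  // second byte read
let value = Int16(highByte) << 8 | Int16(lowByte)
print(value) // 4660, i.e. 0x1234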
Apple has some great Swift documentation on this, here:
https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/AdvancedOperators.html
let shiftBits: UInt8 = 4 // 00000100 in binary
shiftBits << 1 // 00001000
shiftBits << 2 // 00010000
shiftBits << 5 // 10000000
shiftBits << 6 // 00000000
shiftBits >> 2 // 00000001