Honestly speaking, porting to Swift 3 (from Objective-C) is going hard. Here is the easiest but the Swiftiest question.
public func readByte() -> UInt8
{
    // ...
}

public func readShortInteger() -> Int16
{
    return (self.readByte() << 8) + self.readByte();
}
I'm getting this error message from the compiler: "Binary operator + cannot be applied to two UInt8 operands."
What is wrong?
P.S. What a shame ;)
readByte returns a UInt8, so:
- You cannot shift a UInt8 left by 8 bits; you would lose all of its bits.
- The type of the expression is UInt8, which cannot fit the Int16 value it is computing.
- The type of the expression is UInt8, which is not the annotated return type Int16.
func readShortInteger() -> Int16
{
    let highByte = self.readByte()
    let lowByte = self.readByte()
    return Int16(highByte) << 8 | Int16(lowByte)
}
While Swift has a strict left-to-right evaluation order for operands, I refactored the code to make it explicit which byte is read first and which is read second.
Also, an OR operator is more self-documenting and semantic here: the two operands occupy disjoint bit ranges, so no carries are possible.
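To make the evaluation-order point concrete, here is a minimal self-contained sketch; ByteReader is a hypothetical type invented for illustration, not part of the question's code:

struct ByteReader {
    var bytes: [UInt8]
    var index = 0

    mutating func readByte() -> UInt8 {
        defer { index += 1 }
        return bytes[index]
    }

    mutating func readShortInteger() -> Int16 {
        let highByte = readByte() // reads bytes[0] first...
        let lowByte = readByte()  // ...then bytes[1]
        return Int16(highByte) << 8 | Int16(lowByte)
    }
}

var reader = ByteReader(bytes: [0x12, 0x34])
print(reader.readShortInteger()) // 4660, i.e. 0x1234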
Apple has some great Swift documentation on this, here:
https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/AdvancedOperators.html
let shiftBits: UInt8 = 4 // 00000100 in binary
shiftBits << 1 // 00001000
shiftBits << 2 // 00010000
shiftBits << 5 // 10000000
shiftBits << 6 // 00000000
shiftBits >> 2 // 00000001
I have a Data object containing multiple bytes of information. For an algorithm I need this Data to be shifted left or right by one bit. Is there any efficient way to do this?
I tried implementing it something like this, but I believe this is a highly inefficient solution:
fileprivate let LSBofByte: UInt8 = 0x01 // 0000 0001
fileprivate let MSBofByte: UInt8 = 0x80 // 1000 0000

extension Data {
    mutating func leftShiftByteArray() {
        self[0] <<= 1 // Shift first byte
        for i in 1..<self.count { // 1...self.count-1 traps on single-byte Data
            if (self[i] & MSBofByte) == MSBofByte {
                self[i-1] |= LSBofByte // Carry the leading 1 from current byte to previous
            }
            self[i] <<= 1 // Shift other bytes
        }
    }
}
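One possible improvement, sketched under the same assumptions as the code above (byte 0 holds the most significant bits, and the bit shifted out of byte 0 is discarded): walk the bytes from last to first and carry each byte's high bit into the byte before it, so every byte is visited exactly once. This is only a sketch of the idea (leftShiftOnce is a name chosen here), not a benchmarked replacement:

import Foundation

extension Data {
    mutating func leftShiftOnce() {
        var carry: UInt8 = 0
        for i in indices.reversed() {
            let msb = (self[i] & 0x80) >> 7   // bit that moves into the previous byte
            self[i] = (self[i] << 1) | carry  // shift, then pull in the bit from the next byte
            carry = msb
        }
        // the final carry (the old MSB of byte 0) falls off the end
    }
}

var data = Data(bytes: [0b0000_0001, 0b1000_0000])
data.leftShiftOnce()
// data is now [0b0000_0011, 0b0000_0000]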
I need to sign-extend an 8-bit value to 12 bits. In C, I can do it this way. I read Apple's BinaryInteger protocol documentation, but it didn't explain sign-extending to a variable number of bits (and I'm also pretty new at Swift). How can I do this in Swift, assuming val is UInt8 and numbits is 12?
#define MASKBITS(numbits) ((1 << numbits) - 1)
#define SIGNEXTEND_TO_16(val, numbits) \
( \
(int16_t)((val & MASKBITS(numbits)) | ( \
(val & (1 << (numbits-1))) ? ~MASKBITS(numbits) : 0) \
))
You can use Int8(bitPattern:) to convert the given unsigned value to a signed value with the same binary representation, then sign-extend by converting to Int16, make it unsigned again, and finally truncate to the given number of bits:
func signExtend(val: UInt8, numBits: Int) -> UInt16 {
    // Sign extend to unsigned 16-bit:
    var extended = UInt16(bitPattern: Int16(Int8(bitPattern: val)))
    // Truncate to given number of bits:
    if numBits < 16 {
        extended = extended & ((1 << numBits) - 1)
    }
    return extended
}
Example:
for i in 1...16 {
    let x = signExtend(val: 200, numBits: i)
    print(String(format: "%2d %04X", i, x))
}
Output:
1 0000
2 0000
3 0000
4 0008
5 0008
6 0008
7 0048
8 00C8
9 01C8
10 03C8
11 07C8
12 0FC8
13 1FC8
14 3FC8
15 7FC8
16 FFC8
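As a cross-check of one row: for numBits = 12, the input 200 is 0xC8, which reads as -56 in 8-bit two's complement, and the 12-bit two's complement pattern of -56 is 0xFC8, exactly as printed above.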
I had the same question in the context of bitstream parsing. I needed code to parse n-bit two's complement values into Int32. Here is my solution, which works without any conditionals:
extension UInt32 {
    func signExtension(n: Int) -> Int32 {
        let signed = Int32(bitPattern: self << (32 - n))
        let result = signed >> (32 - n)
        return result
    }
}
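The trick is that shifting the n-bit field up to the top of the word turns its highest bit into the Int32's sign bit, so the subsequent arithmetic right shift replicates it across all the vacated positions.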
And a unit test function showing how to use that code:
func testSignExtension_N_2_3() {
    let unsignedValue: UInt32 = 0b110
    let signedValue: Int32 = unsignedValue.signExtension(n: 3)
    XCTAssertEqual(signedValue, -2)
}
How would you implement the equivalent to Java's unsigned right shift operator in Swift?
According to Java's documentation, the unsigned right shift operator ">>>" shifts a zero into the leftmost position, while the leftmost position after ">>" depends on sign extension.
So, for instance,
long s1 = (-7L >>> 16); // result is 281474976710655L
long s2 = (-7L >> 16); // result is -1
In order to implement this in Swift, I would take all the bits except the sign bit by doing something like,
let lsb = Int64.max + negativeNumber + 1
Notice that the number has to be negative! If you overflow the shift operator, the app crashes with EXC_BAD_INSTRUCTION, which is not very nice...
Also, I'm using Int64 on purpose, because there's no bigger datatype and something like (1 << 63) would overflow the Int64 and also crash. So instead of computing ((1 << 63) + negativeNumber) in a bigger datatype, I wrote it as Int64.max + negativeNumber + 1.
Then, shift that positive number with the normal shift, and OR the sign bit back in at the leftmost bit position after the sign:
let shifted = (lsb >> bits) | 0x4000000000000000
However, that doesn't give me the expected result,
((Int64.max - 7 + 1) >> 16) | 0x4000000000000000 // = 4611826755915743231
Not sure what I'm doing wrong...
Also, would it be possible to name this operator '>>>' and extend Int64?
Edit:
Adding here the solution from OOper below,
infix operator >>> : BitwiseShiftPrecedence
func >>> (lhs: Int64, rhs: Int64) -> Int64 {
    return Int64(bitPattern: UInt64(bitPattern: lhs) >> UInt64(rhs))
}
I was implementing the Java Random class in Swift, which also involves truncating 64-bit ints to 32 bits. Thanks to OOper I just realized I can use the truncatingBitPattern initializer to avoid overflow exceptions. The function next as described here becomes this in Swift:
var seed: Int64 = 0

private func next(_ bits: Int32) -> Int32 {
    seed = (seed &* 0x5DEECE66D &+ 0xB) & ((1 << 48) - 1)
    let shifted: Int64 = seed >>> (48 - Int64(bits))
    return Int32(truncatingBitPattern: shifted)
}
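For completeness, a small usage sketch, assuming the >>> operator above is in scope (the seed value 42 is an arbitrary example): java.util.Random scrambles the seed in setSeed before the first draw, so a faithful port has to do the same.

seed = (42 ^ 0x5DEECE66D) & ((1 << 48) - 1) // the setSeed scramble from java.util.Random
print(next(32)) // yields the same bits Java's new Random(42).next(32) would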
One sure way to do it is to use the unsigned shift operation of the corresponding unsigned integer type:
infix operator >>> : BitwiseShiftPrecedence
func >>> (lhs: Int64, rhs: Int64) -> Int64 {
    return Int64(bitPattern: UInt64(bitPattern: lhs) >> UInt64(rhs))
}
print(-7 >>> 16) //->281474976710655
(Using -7 for testing with bit count 16 does not seem to be a good example; it loses all significant bits with a 16-bit right shift.)
If you want to do it your way, the bitwise-ORed missing sign bit cannot be the constant 0x4000000000000000. It needs to be 0x8000_0000_0000_0000 (this constant overflows in Swift's Int64) when the bit count == 0, and it needs to be logically shifted by the same number of bits.
So, you need to write something like this:
infix operator >>>> : BitwiseShiftPrecedence
func >>>> (lhs: Int64, rhs: Int64) -> Int64 {
    if lhs >= 0 {
        return lhs >> rhs
    } else {
        return (Int64.max + lhs + 1) >> rhs | (1 << (63 - rhs))
    }
}

print(-7 >>>> 16) //->281474976710655
It seems far easier to work with unsigned integer types when you need an unsigned shift operation.
Swift has unsigned integer types, so there is no need for a separate unsigned right shift operator. That's a choice in Java that followed from the decision not to have unsigned types.
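As a quick check of that point, the Java example from the question can be reproduced with nothing but the standard unsigned type:

let x: Int64 = -7
let shifted = Int64(bitPattern: UInt64(bitPattern: x) >> 16)
print(shifted) // 281474976710655, matching Java's -7L >>> 16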
I would like to be able to read half floats from a binary file and convert them to a Float in Swift. I've looked at several conversions from other languages such as Java and C#, but I have not been able to get the correct value corresponding to the half float. If anyone could help me with an implementation I would appreciate it. A conversion from Float to half float would also be extremely helpful. Here's an implementation I attempted to convert from this Java implementation:
static func toFloat(value: UInt16) -> Float {
    let value = Int32(value)
    var mantissa = value & 0x03ff
    var exp: Int32 = value & 0x7c00
    if exp == 0x7c00 {
        exp = 0x3fc00
    } else if exp != 0 {
        exp += 0x1c000
        if mantissa == 0 && exp > 0x1c400 {
            return Float((value & 0x8000) << 16 | exp << 13 | 0x3ff)
        }
    } else if mantissa != 0 {
        exp = 0x1c400
        repeat {
            mantissa <<= 1 // normalize a subnormal: shift up until the implicit bit appears
            exp -= 0x400
        } while (mantissa & 0x400) == 0
        mantissa &= 0x3ff
    }
    return Float((value & 0x8000) << 16 | (exp | mantissa) << 13)
}
If you have an array of half-precision data, you can convert all of it to float at once using vImageConvert_Planar16FtoPlanarF, which is provided by Accelerate.framework:
import Accelerate

let n = 2
var input: [UInt16] = [0x3c00, 0xbc00]
var output = [Float](repeating: 0, count: n)

var src = vImage_Buffer(data: &input, height: 1, width: UInt(n), rowBytes: 2*n)
var dst = vImage_Buffer(data: &output, height: 1, width: UInt(n), rowBytes: 4*n)
vImageConvert_Planar16FtoPlanarF(&src, &dst, 0)
// output now contains [1.0, -1.0]
You can also use this method to convert individual values, but it's fairly heavyweight if that's all that you're doing; on the other hand it's extremely efficient if you have large buffers of values to convert.
If you need to convert isolated values, you might put something like the following C function in your bridging header and use it from Swift:
#include <stdint.h>
static inline float loadFromF16(const uint16_t *pointer) { return *(const __fp16 *)pointer; }
This will use hardware conversion instructions when you're compiling for targets that have them (armv7s, arm64, x86_64h), and call a reasonably good software conversion routine when compiling for targets that don't have hardware support.
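Once that declaration is visible through the bridging header, the Swift call site might look like this (a minimal sketch under that assumption):

var half: UInt16 = 0x3c00       // 1.0 in IEEE 754 half precision
let single = loadFromF16(&half) // -> 1.0 as a Float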
addendum: going the other way
You can convert float to half-precision in pretty much the same way:
static inline void storeAsF16(float value, uint16_t *pointer) { *(__fp16 *)pointer = value; }
Or use the function vImageConvert_PlanarFtoPlanar16F.
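That conversion mirrors the earlier one; a sketch of what it might look like, reusing the same buffer setup as above:

import Accelerate

let n = 2
var input: [Float] = [1.0, -1.0]
var output = [UInt16](repeating: 0, count: n)

var src = vImage_Buffer(data: &input, height: 1, width: UInt(n), rowBytes: 4*n)
var dst = vImage_Buffer(data: &output, height: 1, width: UInt(n), rowBytes: 2*n)
vImageConvert_PlanarFtoPlanar16F(&src, &dst, 0)
// output now contains [0x3c00, 0xbc00]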
println(UInt8(1 << 7)) // OK
println(UInt16(1 << 15)) // OK
println(UInt32(1 << 31)) // OK
println(UInt64(1 << 63)) // Crash
I would like to understand why this happens for UInt64 only. Thanks!
Edit:
To make matters more confusing, the following all work:
println(1 << UInt8(7))
println(1 << UInt16(15))
println(1 << UInt32(31))
println(1 << UInt64(63))
My guess is that an intermediate result produced by computing 1 << 63 is too large.
Try println(UInt64(1) << UInt64(63)).
The type inferrer didn't do its job well. It decided that 1 << 63 is a UInt32 and used this function: func <<(lhs: UInt32, rhs: UInt32) -> UInt32.
println(1 << UInt64(63)) works because the compiler knows that, since UInt64(63) is a UInt64, the integer literal 1 must also be inferred as a UInt64; the operation therefore produces a UInt64 and is not out of bounds.
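In other words, giving the whole expression an unsigned 64-bit context up front avoids the narrow intermediate entirely:

let highBit: UInt64 = 1 << 63  // both literals are inferred as UInt64
print(highBit)                 // 9223372036854775808
print(UInt64(1) << UInt64(63)) // same value, no crash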