Optional UInt8? gives 2-byte memory size in Swift

UInt8 has a memory size of 1 byte, but when I make the value optional, it reports a size of 2 bytes.
var serial: UInt8? = 255
print(MemoryLayout.size(ofValue: serial)) // prints 2
var serial: UInt8 = 255
print(MemoryLayout.size(ofValue: serial)) // prints 1
How can I get exactly 1 byte of memory for an integer value?

Under the hood, an optional is an enum and looks something like this:
enum Optional<Wrapped> {
    case some(Wrapped) // not nil
    case none          // nil
}
The symbols ? and ! are just shorthand for referring to this Optional enum. Because all 256 bit patterns of a UInt8 are valid values, there is no spare bit pattern left to encode the .none case, so the compiler adds an extra tag byte; that is why UInt8? is 2 bytes.
However, if you unwrap the optional, what you get back is the wrapped value itself, so it is 1 byte again.
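A minimal sketch of that difference, using illustrative variable names:
let wrapped: UInt8? = 255
print(MemoryLayout.size(ofValue: wrapped))       // 2: one payload byte plus one tag byte
if let unwrapped = wrapped {
    print(MemoryLayout.size(ofValue: unwrapped)) // 1: just the UInt8 itself
}
print(MemoryLayout<UInt8?>.size)                 // 2
print(MemoryLayout<UInt8>.size)                  // 1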

memset with Swift 5.2

Sometimes I need to clear part of a byte array to zero. Doing this with a for loop is slow; using the memset() function is much faster. I have the following code, which clears elements 200 to 299 of the array "a".
var a: [UInt8] = [UInt8](repeating: 1, count: 1000) // a[0...999] set to 1
let start = 200
let length = 100
memset(&a + start, 0, length) // a[200...299] set to 0
This worked well until Swift 5.2 with Xcode 11.4 came out. It still works, but now a warning appears:
Inout expression creates a temporary pointer, but argument #1 should be a pointer that outlives the call to '+'
How can this be done without the warning? Does anybody have an idea?
Xcode shows some more explanation:
Implicit argument conversion from '[UInt8]' to 'UnsafeMutableRawPointer' produces a pointer valid only for the duration of the call to '+'
Use the 'withUnsafeMutableBytes' method on Array in order to explicitly convert argument to buffer pointer valid for a defined scope
I do not understand what this means.
The hint to use withUnsafeMutableBytes is exactly what you should follow:
a.withUnsafeMutableBytes { ptr in
    _ = memset(ptr.baseAddress! + start, 0, length)
}
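For context, here is a minimal self-contained version of the warning-free approach; the array contents are just sample data:
import Foundation
var a = [UInt8](repeating: 1, count: 1000)
let start = 200
let length = 100
a.withUnsafeMutableBytes { ptr in
    // The buffer pointer is only guaranteed valid inside this closure,
    // which is exactly the scope the diagnostic asks for.
    _ = memset(ptr.baseAddress! + start, 0, length)
}
print(a[199], a[200], a[299], a[300]) // 1 0 0 1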

How to convert byte position in Swift

How would I convert this properly? The value I need is bytes 0-1; it has a format of uint16 and its units are degrees.
print("derived : \(characteristic.value!)")
print(String(bytes: characteristic.value!, encoding: .utf16))
derived : 20 bytes
Optional("\0{Ͽ⌜ƀ")
You just need to get the first two bytes of the Data as UInt16.
var result: UInt16 = 0
_ = withUnsafeMutableBytes(of: &result) {characteristic.value!.copyBytes(to: $0, from: 0...1)}
print("\(result) degrees")
This assumes your uint16 format is little-endian, the same as iOS, which is the byte order more often found in BLE characteristics.
(When you want the first value in the Data, from: 0...1 is not strictly needed, but you may want data at some other position.)
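If you want to spell out the byte order, a sketch along these lines works; here data stands in for characteristic.value! and holds made-up sample bytes:
import Foundation
let data = Data([0x2C, 0x01, 0x00, 0x00]) // hypothetical payload; bytes 0-1 are 0x012C = 300
var raw: UInt16 = 0
_ = withUnsafeMutableBytes(of: &raw) { data.copyBytes(to: $0, from: 0...1) }
let degrees = UInt16(littleEndian: raw)   // makes the little-endian assumption explicit
print("\(degrees) degrees")               // 300 degrees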

Why is startIndex not equal to endIndex shifted to startIndex position in String in Swift?

Look at the program:
let s = "1"
print(s.startIndex)
print(s.index(before: s.endIndex))
print(s.index(before: s.endIndex) == s.startIndex)
It returns:
Index(_rawBits: 0)
Index(_rawBits: 256)
true
So, the same position in the string is represented with rawBits 0 and 256. Why?
The raw bits of the index are an implementation detail. As you see in your example, the two values are equal (they return true for ==).
As to the current implementation: in the value 256, bit 8 is set, and that bit is not part of the position. It's a cached offset to the next grapheme cluster, telling you that the next grapheme boundary is one byte away (which wasn't known until you calculated the endIndex).
Equality between two String.Indexes is defined on the upper 50 bits of the _rawBits, aka orderingValue, as follows:
extension String.Index: Equatable {
    @inlinable @inline(__always)
    public static func == (lhs: String.Index, rhs: String.Index) -> Bool {
        return lhs.orderingValue == rhs.orderingValue
    }
}
And since 0 &>> 14 and 256 &>> 14 both equal 0, the positions are equal, and thus the indices are considered equal.
&>> is the masking shift-right operator: it shifts bits to the right, reducing the shift amount modulo the type's bit width (here 64) instead of trapping on an out-of-range shift.
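A small sketch of that comparison; the raw-bit values are the ones observed above and are an implementation detail, not something to rely on:
let s = "1"
let a = s.startIndex
let b = s.index(before: s.endIndex)
print(a == b)                     // true: equality ignores the cached bits
let rawA: UInt64 = 0              // observed _rawBits for a
let rawB: UInt64 = 256            // observed _rawBits for b
print(rawA &>> 14 == rawB &>> 14) // true: both ordering values are 0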

Swift: Why does UInt.max == -1

As the title states, lldb reports the value of UInt.max to be a UInt of -1, which seems highly illogical. Considering that let uint: UInt = -1 doesn't even compile, how is this even possible? I don't see any way to have a negative value of UInt at runtime because the initializer will crash if given a negative value. I want to know the actual maximum value of UInt.
The Int value of -1 and the UInt value UInt.max have the same bit representation in memory.
You can see that if you do:
let i = Int(bitPattern: UInt.max) // i == -1
and in the opposite direction:
if UInt(bitPattern: Int(-1)) == UInt.max {
print("same")
}
Output:
same
The debugger is incorrectly displaying UInt.max as a signed Int. They have the same bit representation in memory (0xffffffffffffffff on a 64-bit system such as iPhone 6 and 0xffffffff on a 32-bit system such as iPhone 5), and the debugger apparently chooses to show that value as an Int.
You can see the same issue if you do:
print(String(format: "%d", UInt.max)) // prints "-1"
It doesn't mean UInt.max is -1, just that both have the same representation in memory.
To see the maximum value of UInt, do the following in an app or on a Swift Playground:
print(UInt.max)
This will print 18446744073709551615 on a 64-bit system (such as a Macintosh or iPhone 6) and 4294967295 on a 32-bit system (such as an iPhone 5).
In lldb:
(lldb) p String(UInt.max)
(String) $R0 = "18446744073709551615"
(lldb)
This sounds like a case of switching between interpreting the same bit pattern under the two's-complement and unsigned representations.
In the unsigned world, a binary number is just a binary number: the more bits that are 1, the bigger the number, since there is no need to encode a sign. To cover the largest range of signed values, the two's-complement scheme encodes non-negative values as normal as long as the most significant bit is not 1; if the most significant bit is 1, the bits are reinterpreted as described at https://en.wikipedia.org/wiki/Two%27s_complement.
As shown on Wikipedia, the two's-complement representation of -1 has all bits set to 1, which is also the maximal unsigned value.
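A short sketch, assuming a 64-bit platform, makes the shared bit pattern visible:
let allOnes = UInt.max
print(String(allOnes, radix: 16))       // ffffffffffffffff
print(Int(bitPattern: allOnes))         // -1
print(UInt(bitPattern: -1) == UInt.max) // true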

Why is the enum size different in Swift?

I am making an app that communicates between iPhones. It uses an enum whose raw type is UInt32, but sizeof() reports a different size. Why is it not equal to UInt32?
enum Message: UInt32 {
    case A = 1
    case B = 2
    case C = 3
}
sizeof(UInt32)  // 4
sizeof(Message) // 1
They must be optimising the storage: the enum instance only needs to record which case it is, and 1 byte is enough for three cases. The raw value is computed on demand rather than stored, so asking for it gives you a full UInt32. The following works:
sizeofValue(Message.A.toRaw()) // 4
Calling toRaw() (rawValue in later Swift versions) on an enum case returns a value of the enum's raw type, UInt32 in this case, which is why its size is 4 bytes.
In reference to this answer, Types in objective-c on iPhone, an unsigned int is 32 bits, i.e. 4 bytes.
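The same experiment in current Swift (MemoryLayout replaced sizeof in Swift 3); this sketch simply mirrors the enum above:
enum Message: UInt32 {
    case a = 1
    case b = 2
    case c = 3
}
print(MemoryLayout<UInt32>.size)                      // 4
print(MemoryLayout<Message>.size)                     // 1: just the case tag
print(MemoryLayout.size(ofValue: Message.a.rawValue)) // 4: the raw value is a real UInt32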