NSCoder for unsigned integers with Swift

In my Swift app, I really need to store a UInt32 value and retrieve it using NSCoder.
The issue is, there are methods for signed integers:
coder.decodeInteger()
coder.decodeInt32()
coder.decodeInt64()
but not for unsigned integers.
Also, casting an Int32 with a negative value to a UInt32 does not seem to work.
Am I missing something?

The tool you want is init(bitPattern:). This is the same as C-style casting on integers (which is how you use these methods in ObjC when you need unsigned ints). It just reinterprets the bits.
UInt32(bitPattern: coder.decodeInt32())
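
For keyed archiving, the full round trip looks like this. This is a minimal sketch; the Settings class and the "flags" key are made up for illustration:

import Foundation

class Settings: NSObject, NSCoding {
    let flags: UInt32

    init(flags: UInt32) {
        self.flags = flags
        super.init()
    }

    func encode(with coder: NSCoder) {
        // Store the UInt32 by reinterpreting its bits as an Int32.
        coder.encode(Int32(bitPattern: flags), forKey: "flags")
    }

    required init?(coder: NSCoder) {
        // Reinterpret the stored bits back as a UInt32; no information is lost.
        flags = UInt32(bitPattern: coder.decodeInt32(forKey: "flags"))
        super.init()
    }
}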

Related

Why are randomly generated numbers UInt32 by default in Swift?

I stumbled upon the fact that when a random number is generated in Swift, it is of type UInt32 by default instead of type Int.
What's the reason?
I suspect you are referring to arc4random. This function returns UInt32 because the underlying C function (also called arc4random) returns uint32_t, which is the C equivalent of Swift's UInt32.
I would assume returning the C type directly keeps it fast, since no conversion is needed. If you want Int random numbers, take a look at GKRandomSource in GameplayKit: https://developer.apple.com/documentation/gameplaykit/gkrandomsource
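
For instance (a minimal sketch; GKRandomSource deals in Int directly, so no conversion is needed):

import Foundation
import GameplayKit

let raw: UInt32 = arc4random()  // UInt32, mirroring the C function's uint32_t
let roll = GKRandomSource.sharedRandom().nextInt(upperBound: 6) + 1  // Int in 1...6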

Precondition failed: Negative count not allowed

Error:
Precondition failed: Negative count not allowed: file /BuildRoot/Library/Caches/com.apple.xbs/Sources/swiftlang/swiftlang-900.0.74.1/src/swift/stdlib/public/core/StringLegacy.swift, line 49
Code:
String(repeating: "a", count: -1)
Thinking:
Well, it doesn't make sense to repeat a string a negative number of times. Since we have unsigned types in Swift, why not use a UInt?
Here we have some documentation about it.
Use UInt only when you specifically need an unsigned integer type with the same size as the platform's native word size. If this isn't the case, Int is preferred, even when the values to be stored are known to be nonnegative. A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference.
Apple Docs
OK, Int is preferred, so the API is just following the rules. But why is the String API designed like that? Why isn't this initializer private, with a public one taking a UInt or something like that? Is there a "real" reason? Is this some "undefined behavior" kind of thing?
Also: https://forums.developer.apple.com/thread/98594
This isn't undefined behavior — in fact, a precondition indicates the exact opposite: an explicit check was made to ensure that the given count is positive.
As to why the parameter is an Int and not a UInt — this is a consequence of two decisions made early in the design of Swift:
1. Unlike C and Objective-C, Swift does not allow implicit (or even explicit) casting between integer types. You cannot pass an Int to a function which takes a UInt, and vice versa, nor will the cast myInt as? UInt succeed. Swift's preferred method of converting is using initializers: UInt(myInt)
2. Since Ints are more generally applicable than UInts, they are the preferred integer type
As such, since converting between Ints and UInts can be cumbersome and verbose, the easiest way to interoperate between the largest number of APIs is to write them all in terms of the common integer currency type: Int. As the docs you quote mention, this "aids code interoperability, avoids the need to convert between different number types, and matches integer type inference"; trapping at runtime on invalid input is a tradeoff of this decision.
In fact, Int is so strongly ingrained in Swift that when Apple framework interfaces are imported into Swift from Objective-C, NSUInteger parameters and return types are converted to Int and not UInt, for significantly easier interoperability.
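
A minimal sketch of that conversion friction (the unsigned variable here is hypothetical):

let requested: UInt = 3  // imagine this came from a C API that uses unsigned counts
// let s = String(repeating: "a", count: requested)  // error: cannot convert UInt to Int
let s = String(repeating: "a", count: Int(requested))  // "aaa"; the explicit initializer is required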

Should I prefer to use specifically-sized Int (Int8 and Int16) in Swift?

I'm converting projects from Java to Swift. My Java code uses small data types (short, byte). Should I use their Swift equivalents (Int16, Int8), or just use Int for everything? Is there any memory or speed benefit to the smaller types?
Use Double and Int unless compelled by circumstances to do otherwise. The other types are all for compatibility with externalities.
For example, you have to use CGFloat to interchange with Core Graphics, and an occasional UIKit object requires a Float instead of a Double; and you might have to use Int8 for purposes of interchange with some C API or to deal with data downloaded from the network.
But Double and Int are the "natural" Swift types, and Swift numerics are very rigid and clumsy, so you should stick with those types wherever you can.
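
A quick illustration of that rigidity (the values here are invented): mixing integer widths in arithmetic always requires an explicit conversion.

let byte: Int8 = 100  // the equivalent of a Java byte
let total: Int = 5
// let sum = total + byte  // error: binary operator '+' cannot be applied to Int and Int8
let sum = total + Int(byte)  // every mixed-width operation needs an explicit conversion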

How to convert the byte data of Int32 to UInt32 and back?

Academically Natured Question:
How can the byte data of an Int32 with the value -1 be cast to a UInt32? (Can Swift do that?)
Understanding:
I know that -1 isn't a value that can be represented by an unsigned integer, as UInts only hold values greater than or equal to 0.
However, I also know that Int32 and UInt32 occupy the same number of bytes (4 × 8 = 32 bits). That byte space should be usable for either type, even though it obviously wouldn't represent the same value.
Conclusion:
There should be some simple way to take the raw bit data of an Int32 and use it for a UInt32...
Cast the variable via bitPattern (thanks Jthora). Plenty of help here on SO for this:
Converting signed to unsigned in Swift
Int to UInt (and vice versa) bit casting in Swift
For 32 bits, 0xFFFFFFFF is -1 when signed, or 4294967295 when unsigned.
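
A short demonstration of the round trip:

let signed: Int32 = -1
let unsigned = UInt32(bitPattern: signed)  // 4294967295 (0xFFFFFFFF)
let back = Int32(bitPattern: unsigned)     // -1 again; the 32 bits never change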

What's the purpose of unsigned type here?

Sorry for the multitude of iPhone programming newbie questions, but what is the reason for having an unsigned return type for something such as
- (unsigned)count
for the NSArray class.
Why not just define it as
- (int)count
?
An array can't have a negative number of items, so an unsigned integer is a much better match for the kinds of values this method will actually return.
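
Note that when such interfaces are imported into Swift, NSUInteger comes through as Int anyway (see the interoperability discussion above), so from Swift the distinction mostly disappears:

import Foundation

let items: NSArray = ["a", "b", "c"]
let n: Int = items.count  // NSUInteger in Objective-C, imported as Int in Swift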