"sign" function in Swift - swift

Is there a function that returns +1 for positive numbers and -1 for negatives in Swift?
I looked through the enormous file that appears if you right-click->definitions on a typical function, but if it's in there I don't know its name.
I did this:
(num < 0 ? -1 : 1)
But I'd rather use a built-in function if there is one, for self-documenting reasons at the very least.

Swift 4
As has already been pointed out in Wil Shipley's Swift 4 answer, there is now a sign property in the FloatingPoint protocol:
FloatingPointSign
The sign of a floating-point value.
Enumeration Cases
case minus The sign for a negative value.
case plus The sign for a positive value.
However, the doc comment on sign in the source code of FloatingPoint contains important information that is not (yet?) present in the generated docs:
/// The sign of the floating-point value.
///
/// The `sign` property is `.minus` if the value's signbit is set, and
/// `.plus` otherwise. For example:
///
/// let x = -33.375
/// // x.sign == .minus
///
/// Do not use this property to check whether a floating point value is
/// negative. For a value `x`, the comparison `x.sign == .minus` is not
/// necessarily the same as `x < 0`. In particular, `x.sign == .minus` if
/// `x` is -0, and while `x < 0` is always `false` if `x` is NaN, `x.sign`
/// could be either `.plus` or `.minus`.
Note in particular "... .minus if the value's signbit is set" and "Do not use this property to check whether a floating point value is negative".
In summary: use the new sign property of the FloatingPoint protocol to check whether a value's sign bit is set or not, but take some care if you attempt to use this property to tell whether a number is negative or not.
var f: Float = 0.0
if case .minus = (-f).sign { print("A. f is negative!") }
f = -Float.nan
if f < 0 { print("B. f is negative!") }
if case .minus = f.sign { print("C. f is negative!") }
// A. f is negative!
// C. f is negative!
Swift 3
As for built-in functions, I think the closest you'll get is copysign(_: Double, _: Double) -> Double from Foundation:
let foo = -15.2
let sign = copysign(1.0, foo) // -1.0 (Double)
Naturally, this needs some type conversion in case you're not operating on a value of type Double.
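For example, a quick sketch of what that conversion could look like for a Float or an Int (the variable names are just illustrative):
import Foundation

let width: Float = -3.5
let widthSign = Float(copysign(1.0, Double(width))) // -1.0 (Float)

let count = -7
let countSign = Int(copysign(1.0, Double(count)))   // -1 (Int)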
However, I see no reason not to create your own extension tailored to your needs, especially for a function as simple as sign, since it needn't get bloated, e.g.
extension IntegerType {
    func sign() -> Int {
        return (self < 0 ? -1 : 1)
    }
    /* or, use signature: func sign() -> Self */
}

extension FloatingPointType {
    func sign() -> Int {
        return (self < Self(0) ? -1 : 1)
    }
}
(here yielding 1 also for 0, as in the example in your question).
(Edit with regard to your comment below)
An alternative solution to the above would be to define your own protocol with a default implementation of sign(), so that all types conforming to this protocol would have access to that sign() method.
protocol Signable {
    init()
    func <(lhs: Self, rhs: Self) -> Bool
}

extension Signable {
    func sign() -> Int {
        return (self < Self() ? -1 : 1)
    }
}
/* extend signed integer types to Signable */
extension Int: Signable { } // already have < and init() functions, OK
extension Int8 : Signable { } // ...
extension Int16 : Signable { }
extension Int32 : Signable { }
extension Int64 : Signable { }
/* extend floating point types to Signable */
extension Double : Signable { }
extension Float : Signable { }
extension CGFloat : Signable { }
/* example usage */
let foo = -4.2
let bar = 42
foo.sign() // -1 (Int)
bar.sign() // 1 (Int)

The simd library has a sign method:
import simd
sign(-100.0) // returns -1
sign(100.0) // returns 1
sign(0.0) // returns 0
You get simd for free if you import SpriteKit.

In Swift 4, floats have a new property:
public var sign: FloatingPointSign { get }
(However, this only checks the sign bit, so it will give surprising answers for cases like -0; see the accepted answer above.)
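For instance, a small sketch of the two cases where the sign bit and a plain < 0 comparison disagree:
let negativeZero = -0.0
negativeZero < 0     // false
negativeZero.sign    // .minus, because the sign bit is set

let notANumber = -Double.nan
notANumber < 0       // false (comparisons with NaN are always false)
notANumber.sign      // .minus here, since negation flips the sign bit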

You can use signum() if your value is an integer.
https://developer.apple.com/documentation/swift/int/2886673-signum
Here is a code snippet to make it clear;
let negative: Int = -10
let zero: Int = 0
let positive: Int = 10
print(negative.signum()) // prints "-1"
print(zero.signum()) // prints "0"
print(positive.signum()) // prints "1"

Use:
let result = signbit(number)
This will return 1 for negative numbers and 0 for positives.
let number = -1.0
print("\(signbit(number))")
// prints "1"

let number = 1.0
print("\(signbit(number))")
// prints "0"

FloatingPointType has a built-in computed property, isSignMinus, but it returns a Boolean. If you only need this operation on floats you can use an extension like this:
extension FloatingPointType {
    var signValue: Int {
        return isSignMinus ? -1 : 1
    }
}
However, I believe the best approach would be to extend the SignedNumberType protocol.
extension SignedNumberType {
    var signValue: Int {
        return (self >= -self) ? 1 : -1
    }
}
If you want 0 to return -1 then just change >= to >.
Test cases:
print(3.0.signValue)
print(0.signValue)
print(-3.0.signValue)

Since copysign cannot be used with integer types, I'm using this extension:
extension Comparable where Self: SignedNumber {
    var sign: Int {
        guard self != -self else {
            return 0
        }
        return self > -self ? 1 : -1
    }
}
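A quick usage sketch (these examples stick to Int, since floating-point types also have the unrelated FloatingPointSign property discussed above):
let negative = -5
let zero = 0
let positive = 5
negative.sign // -1
zero.sign     //  0
positive.sign //  1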

Related

Swift 4.2+ seeding a random number generator

I'm trying to generate seeded random numbers with Swift 4.2+, with the Int.random() function, however there is no given implementation that allows for the random number generator to be seeded. As far as I can tell, the only way to do this is to create a new random number generator that conforms to the RandomNumberGenerator protocol. Does anyone have a recommendation for a better way to do it, or an implementation of a RandomNumberGenerator conforming class that has the functionality of being seeded, and how to implement it?
Also, I have seen the two functions srand and drand mentioned a couple of times while looking for a solution, but judging by how rarely they come up, I'm not sure whether using them is bad practice, and I also can't find any documentation on them.
I'm looking for the simplest solution, not necessarily the most secure or fastest performance one (e.g. using an external library would not be ideal).
Update: By "seeded", I mean that I want to pass in a seed to the random number generator so that if I pass the same seed to two different devices or at two different times, the generator will produce the same numbers. The purpose is that I'm randomly generating data for an app, and rather than save all that data to a database, I want to save the seed and regenerate the data with that seed every time the user loads the app.
So I used Martin R's suggestion to use GameplayKit's GKMersenneTwisterRandomSource to make a class that conforms to the RandomNumberGenerator protocol, an instance of which I can use in functions like Int.random():
import GameplayKit

class SeededGenerator: RandomNumberGenerator {
    let seed: UInt64
    private let generator: GKMersenneTwisterRandomSource

    convenience init() {
        self.init(seed: 0)
    }

    init(seed: UInt64) {
        self.seed = seed
        generator = GKMersenneTwisterRandomSource(seed: seed)
    }

    func next<T>(upperBound: T) -> T where T: FixedWidthInteger, T: UnsignedInteger {
        return T(abs(generator.nextInt(upperBound: Int(upperBound))))
    }

    func next<T>() -> T where T: FixedWidthInteger, T: UnsignedInteger {
        return T(abs(generator.nextInt()))
    }
}
Usage:
// Make a random seed and store in a database
let seed = UInt64.random(in: UInt64.min ... UInt64.max)
var generator = SeededGenerator(seed: seed)
// Or if you just need the seeding ability for testing,
// var generator = SeededGenerator()
// uses a default seed of 0
let chars = ['a','b','c','d','e','f']
let randomChar = chars.randomElement(using: &generator)
let randomInt = Int.random(in: 0 ..< 1000, using: &generator)
// etc.
This gave me the flexibility and easy implementation that I needed by combining the seeding functionality of GKMersenneTwisterRandomSource and the simplicity of the standard library's random functions (like .randomElement() for arrays and .random() for Int, Bool, Double, etc.)
Here's an alternative to the answer from RPatel99 that accounts for the range of values GKRandom produces.
import GameplayKit

struct ArbitraryRandomNumberGenerator: RandomNumberGenerator {

    mutating func next() -> UInt64 {
        // GKRandom produces values in the [INT32_MIN, INT32_MAX] range; hence
        // we need two numbers to produce a full 64-bit value.
        let next1 = UInt64(bitPattern: Int64(gkrandom.nextInt()))
        let next2 = UInt64(bitPattern: Int64(gkrandom.nextInt()))
        return next1 ^ (next2 << 32)
    }

    init(seed: UInt64) {
        self.gkrandom = GKMersenneTwisterRandomSource(seed: seed)
    }

    private let gkrandom: GKRandom
}
Simplified version for Swift 5:
struct RandomNumberGeneratorWithSeed: RandomNumberGenerator {
    init(seed: Int) { srand48(seed) }
    func next() -> UInt64 { return UInt64(drand48() * Double(UInt64.max)) }
}

@State var seededGenerator = RandomNumberGeneratorWithSeed(seed: 123)
// when deployed, use seed: Int.random(in: 0..<Int.max)
Then to use it:
let rand0to99 = Int.random(in: 0..<100, using: &seededGenerator)
I ended up using srand48() and drand48() to generate a pseudo-random number with a seed for a specific test.
class SeededRandomNumberGenerator: RandomNumberGenerator {
    let range: ClosedRange<Double> = Double(UInt64.min) ... Double(UInt64.max)

    init(seed: Int) {
        // srand48() — pseudo-random number initializer
        srand48(seed)
    }

    func next() -> UInt64 {
        // drand48() — pseudo-random number generator
        return UInt64(range.lowerBound + (range.upperBound - range.lowerBound) * drand48())
    }
}
So, in production the implementation uses the SystemRandomNumberGenerator but in the test suite it uses the SeededRandomNumberGenerator.
Example:
let messageFixtures: [Any] = [
    "a string",
    ["some", ["values": 456]],
]

var seededRandomNumberGenerator = SeededRandomNumberGenerator(seed: 13)

func randomMessageData() -> Any {
    return messageFixtures.randomElement(using: &seededRandomNumberGenerator)!
}
// Always return the same element in the same order
randomMessageData() //"a string"
randomMessageData() //["some", ["values": 456]]
randomMessageData() //["some", ["values": 456]]
randomMessageData() //["some", ["values": 456]]
randomMessageData() //"a string"
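The same protocol also makes the production/test split mentioned above easy to express: any API that accepts a RandomNumberGenerator can receive either the system generator or the seeded one. A small sketch (rollDice is a made-up example function, not part of the answer above):
func rollDice<G: RandomNumberGenerator>(using generator: inout G) -> Int {
    return Int.random(in: 1...6, using: &generator)
}

var systemGenerator = SystemRandomNumberGenerator()
let productionRoll = rollDice(using: &systemGenerator)  // nondeterministic

var testGenerator = SeededRandomNumberGenerator(seed: 13)
let reproducibleRoll = rollDice(using: &testGenerator)  // same value for the same seed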
Looks like Swift's implementation of RandomNumberGenerator.next(using:) changed in 2019. This affects Collection.randomElement(using:) and causes it to always return the first element if your generator's next() -> UInt64 implementation doesn't produce values uniformly across the whole domain of UInt64. The GKRandom-based solution provided here is therefore problematic, because its nextInt() method states:
* The value is in the range of [INT32_MIN, INT32_MAX].
Here's a solution that works for me, using the RNG from Swift for TensorFlow found here:
public struct ARC4RandomNumberGenerator: RandomNumberGenerator {
    var state: [UInt8] = Array(0...255)
    var iPos: UInt8 = 0
    var jPos: UInt8 = 0

    /// Initialize ARC4RandomNumberGenerator using an array of UInt8. The array
    /// must have length between 1 and 256 inclusive.
    public init(seed: [UInt8]) {
        precondition(seed.count > 0, "Length of seed must be positive")
        precondition(seed.count <= 256, "Length of seed must be at most 256")
        var j: UInt8 = 0
        for i: UInt8 in 0...255 {
            j &+= S(i) &+ seed[Int(i) % seed.count]
            swapAt(i, j)
        }
    }

    // Produce the next random UInt64 from the stream, and advance the internal
    // state.
    public mutating func next() -> UInt64 {
        var result: UInt64 = 0
        for _ in 0..<UInt64.bitWidth / UInt8.bitWidth {
            result <<= UInt8.bitWidth
            result += UInt64(nextByte())
        }
        return result
    }

    // Helper to access the state.
    private func S(_ index: UInt8) -> UInt8 {
        return state[Int(index)]
    }

    // Helper to swap elements of the state.
    private mutating func swapAt(_ i: UInt8, _ j: UInt8) {
        state.swapAt(Int(i), Int(j))
    }

    // Generates the next byte in the keystream.
    private mutating func nextByte() -> UInt8 {
        iPos &+= 1
        jPos &+= S(iPos)
        swapAt(iPos, jPos)
        return S(S(iPos) &+ S(jPos))
    }
}
Hat tip to my coworkers Samuel, Noah, and Stephen who helped me get to the bottom of this.

Integer conversion to generic type for calculating average. Non-nominal type does not support explicit initialization

I am trying to create a generic Queue class that has an average function however I am having trouble doing so because I need a protocol that somehow says that T(Int) is a valid operation.
This was my attempt
class Queue<T: Numeric & Comparable> {
    private var array: [T]
    .....
    func average() -> T {
        return sum() / T(array.count)
    }
}
However, for obvious reasons, the compiler says that I can't do that because T does not support explicit initialization. What is the name of a protocol that provides this behavior, or how can I code my own?
Note that the Numeric protocol also includes floating-point types, so you should constrain your Queue generic type to BinaryInteger. And regarding your average return type, you should return Double instead of the generic integer type. Your Queue class should look like this:
class Queue<T: BinaryInteger & Comparable> {
    private var array: [T] = []

    init(array: [T]) {
        self.array = array
    }

    func sum() -> T {
        return array.reduce(0, +)
    }

    func average() -> Double {
        return array.isEmpty ? 0 : Double(Int(sum())) / Double(array.count)
    }

    // If you would like your average to return the generic type instead of Double,
    // you can use the numericCast method, which traps on overflow and converts a value
    // when the destination type can be inferred from the context.
    // func average() -> T {
    //     return sum() / numericCast(array.count)
    // }
}
Playground Testing
let queue = Queue(array: [1,2,3,4,5])
queue.sum() // 15
queue.average() // 3
If you would like to extend an array of Numeric, BinaryInteger or FloatingPoint types you can check this answer.
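If you want a rough idea of what such an extension could look like, here is a sketch (not the linked answer itself):
extension Collection where Element: BinaryInteger {
    var average: Double {
        return isEmpty ? 0 : Double(reduce(0, +)) / Double(count)
    }
}

extension Collection where Element: FloatingPoint {
    var average: Element {
        return isEmpty ? 0 : reduce(0, +) / Element(count)
    }
}

[1, 2, 3, 4].average // 2.5
[1.5, 2.5].average   // 2.0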

Elegant `bounded` methodology in Swift

I'm looking for a more elegant way to create bounded limiters for numbers, primarily to be used in setters. There are plenty of techniques for determining whether a value falls within bounds, but I don't see any native functions for forcing an incoming value to conform to those bounds.
The accepted answer here comes close but I want to cap the values rather than merely enforce them.
Here's what I have so far. I'm not sure about the Int extension. And I'd prefer to collapse the if-else into a single elegant line of code, if possible. Ideally I'd like to shorten the actual implementation in the struct, as well.
extension Int {
    func bounded(_ min: Int, _ max: Int) -> Int {
        if self < min {
            return min
        } else if self > max {
            return max
        } else {
            return self
        }
    }
}
print(5.bounded(4, 6)) // 5
print(5.bounded(1, 3)) // 3
print(5.bounded(6, 9)) // 6
// Used in a sentence
struct Animal {
    var _legs: Int = 4
    var legs: Int {
        get {
            return _legs
        }
        set {
            _legs = newValue.bounded(1, 4)
        }
    }
}
var dog = Animal()
print(dog.legs) // 4
dog.legs = 3
print(dog.legs) // 3
dog.legs = 5
print(dog.legs) // 4
dog.legs = 0
print(dog.legs) // 1
This is Apple's own approach, taken from this sample code:
func clamp<T: Comparable>(value: T, minimum: T, maximum: T) -> T {
    return min(max(value, minimum), maximum)
}
I would generalize this extension to any Comparable, so that more types can benefit from it. Also, I would change the parameter to be a ClosedRange<Self> rather than two separate Self parameters, because that's the more common way of handling ranges in Swift. That'll come in especially handy when dealing with array indices.
extension Comparable {
    func clamped(to r: ClosedRange<Self>) -> Self {
        let min = r.lowerBound, max = r.upperBound
        return self < min ? min : (max < self ? max : self)
    }
}
// Usage examples:
10.clamped(to: 0...5) // => 5
"a".clamped(to: "x"..."z") // => "x"
-1.clamped(to: 0...1) // => 0
A very clean alternative to your if-else statements that keeps the readability would be:
extension Comparable {
    func clamp(_ min: Self, _ max: Self) -> Self {
        return min...max ~= self ? self : (max < self ? max : min)
    }
}
I think this is a good alternative to using a range as the parameter because, in my opinion, it is annoying to write 6.clamp((4...5)) each time instead of 6.clamp(4,5).
When it comes to your struct, I think you should not use this clamp extension at all, because silently turning, say, 100 legs into 4 hides the caller's mistake... I cannot see a good reason for doing this, but it's up to you.

Trying to extend IntegerType (and FloatingPointType); Why can't all Int types be converted to NSTimeInterval

(This probably needs a better title...)
I would like to have a set of accessors I can use in code to quickly express time durations. E.g:
42.seconds
3.14.minutes
0.5.hours
13.days
This post illustrates that you can't just do it with a simple new protocol, extension, and forcing IntegerType and FloatingPointType to adopt that. So I thought I'd just go the more redundant route and just extend IntegerType directly (and then repeat the code for FloatingPointType).
extension IntegerType {
    var seconds: NSTimeInterval {
        return NSTimeInterval(self)
    }
}
The error generated is confusing:
Playground execution failed: /var/folders/2k/6y8rslzn1m95gjpg534j7v8jzr03tz/T/./lldb/9325/playground131.swift:31:10: error: cannot invoke initializer for type 'NSTimeInterval' with an argument list of type '(Self)'
return NSTimeInterval(self)
^
/var/folders/2k/6y8rslzn1m95gjpg534j7v8jzr03tz/T/./lldb/9325/playground131.swift:31:10: note: overloads for 'NSTimeInterval' exist with these partially matching parameter lists: (Double), (UInt8), (Int8), (UInt16), (Int16), (UInt32), (Int32), (UInt64), (Int64), (UInt), (Int), (Float), (Float80), (String), (CGFloat), (NSNumber)
return NSTimeInterval(self)
What confuses me is that it seems to say that I can't do an NSTimeInterval() initializer with (Self), but everything Self represents is listed in the next line where it shows all of the possible initializers of NSTimeInterval. What am I missing here?
Aside: I would love it if there were a well-written tutorial on Swift's type system and doing these kinds of things. The intermediate/advanced stuff is just not well covered in Apple's sparse Swift documentation.
Update/Clarification:
What I want is to be able to evaluate any of the above expressions:
42.seconds --> 42
3.14.minutes --> 188.4
0.5.hours --> 1800
13.days --> 1123200
Furthermore, I want the return type of these to be NSTimeInterval (type alias for Double), such that:
42.seconds is NSTimeInterval --> true
3.14.minutes is NSTimeInterval --> true
0.5.hours is NSTimeInterval --> true
13.days is NSTimeInterval --> true
I know that I can achieve this by simply extending Double and Int as such:
extension Int {
    var seconds: NSTimeInterval {
        return NSTimeInterval(self)
    }
    var minutes: NSTimeInterval {
        return NSTimeInterval(self * 60)
    }
    var hours: NSTimeInterval {
        return NSTimeInterval(self * 3600)
    }
    var days: NSTimeInterval {
        return NSTimeInterval(self * 3600 * 24)
    }
}

extension Double {
    var seconds: NSTimeInterval {
        return NSTimeInterval(self)
    }
    var minutes: NSTimeInterval {
        return NSTimeInterval(self * 60)
    }
    var hours: NSTimeInterval {
        return NSTimeInterval(self * 3600)
    }
    var days: NSTimeInterval {
        return NSTimeInterval(self * 3600 * 24)
    }
}
But I would also like the following expression to work:
let foo: UInt = 4242
foo.minutes --> 254520
foo.minutes is NSTimeInterval --> true
This won't work though, because I have only extended Int, not UInt. I could redundantly extend UInt, and then UInt16, and then Int16, etc....
I wanted to generalize the extension of Int to IntegerType as shown in the original listing, so that I could just gain the conversions generally for all integer types. And then do the same for FloatingPointType rather than specifically Double. However, that produces the original error. I want to know why I can't extend IntegerType as generally shown. Are there other IntegerType adopters other than the ones shown in the list, that make it so the NSTimeInterval() initializer does not resolve?
I could redundantly extend Uint, and then UInt16, and then Int16, etc....
Correct. That is how it's done today in Swift. You can't declare that a protocol conforms to another protocol in an extension. ("Why?" "Because the compiler doesn't allow it.")
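In other words, the line you would want to write, using the TimeIntervalConvertible protocol declared below, is rejected outright (the exact wording of the error varies by compiler version):
extension IntegerType: TimeIntervalConvertible {} // error: extension of protocol 'IntegerType' cannot have an inheritance clause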
But that doesn't mean you have to rewrite all the implementation. You currently have to implement it three times (four times if you want Float80, but that doesn't seem useful here). First, you declare your protocol.
import Foundation

// Declare the protocol
protocol TimeIntervalConvertible {
    func toTimeInterval() -> NSTimeInterval
}

// Add all the helpers you wanted
extension TimeIntervalConvertible {
    var seconds: NSTimeInterval {
        return NSTimeInterval(self.toTimeInterval())
    }
    var minutes: NSTimeInterval {
        return NSTimeInterval(self.toTimeInterval() * 60)
    }
    var hours: NSTimeInterval {
        return NSTimeInterval(self.toTimeInterval() * 3600)
    }
    var days: NSTimeInterval {
        return NSTimeInterval(self.toTimeInterval() * 3600 * 24)
    }
}

// Provide the implementations. FloatingPointType doesn't have an equivalent to
// toIntMax(). There's no toDouble() or toFloatMax(). Converting a Float to
// a Double injects data noise in a way that converting Int8 to IntMax does not.
extension Double {
    func toTimeInterval() -> NSTimeInterval { return NSTimeInterval(self) }
}
extension Float {
    func toTimeInterval() -> NSTimeInterval { return NSTimeInterval(self) }
}
extension IntegerType {
    func toTimeInterval() -> NSTimeInterval { return NSTimeInterval(self.toIntMax()) }
}

// And then we tell it that all the int types can get this implementation
extension Int: TimeIntervalConvertible {}
extension Int8: TimeIntervalConvertible {}
extension Int16: TimeIntervalConvertible {}
extension Int32: TimeIntervalConvertible {}
extension Int64: TimeIntervalConvertible {}
extension UInt: TimeIntervalConvertible {}
extension UInt8: TimeIntervalConvertible {}
extension UInt16: TimeIntervalConvertible {}
extension UInt32: TimeIntervalConvertible {}
extension UInt64: TimeIntervalConvertible {}
This is how number types are currently done in Swift. Look through stdlib. You'll see lots of stuff like:
extension Double {
    public init(_ v: UInt8)
    public init(_ v: Int8)
    public init(_ v: UInt16)
    public init(_ v: Int16)
    public init(_ v: UInt32)
    public init(_ v: Int32)
    public init(_ v: UInt64)
    public init(_ v: Int64)
    public init(_ v: UInt)
    public init(_ v: Int)
}
Would it be nice in some cases to talk about "number-like things?" Sure. You can't in Swift today.
"Why?"
Because the compiler doesn't implement it. Some day it may. Until then, create the extension for every type you want it on. A year ago this would have taken even more code.
Note that while some of this is "Swift doesn't have that feature yet," some is also on purpose. Swift intentionally requires explicit conversion between number types. Converting between number types can often lead to losing information or injecting noise, and has historically been a source of tricky bugs. You should be thinking about that every time you convert a number. For example, there's the obvious case that going from Int64 to Int8 or from UInt8 to Int8 can lose information. But going from Int64 to Double can also lose information. Not all 64-bit integers can be expressed as a Double. This is a subtle fact that burns people quite often when dealing with very large numbers, and Swift encourages you to deal with it. Even converting a Float to a Double injects noise into your data. 1/10 expressed as a Float is a different value than 1/10 expressed as a Double. When you convert the Float to a Double, did you mean to extend the repeating digits or not? You'll introduce different kinds of error depending on which you pick, so you need to pick.
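Two quick illustrations of those points (assuming the usual IEEE 754 Float and Double):
Double(Int64.max) == Double(Int64.max - 1)  // true: both round to the same Double
let tenth: Float = 0.1
Double(tenth) == 0.1                        // false: the Float's rounding error is carried along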
Note also that your .days can introduce subtle bugs depending on the exact problem domain. A day is not always 24 hours. It can be 23 hours or 25 hours depending on DST changes. Sometimes that matters. Sometimes it doesn't. But it's a reason to be very careful about treating "days" as though it were a specific number of seconds. Usually if you want to work in days, you should be using NSDate, not NSTimeInterval. I'd be very suspicious of that particular one.
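If you do need calendar-aware day arithmetic, later Swift versions make it straightforward with Calendar (shown here with the modern Date/Calendar API rather than the NSTimeInterval-era code used above):
import Foundation

let now = Date()
// The calendar decides whether "one day later" is 23, 24, or 25 hours away.
let tomorrow = Calendar.current.date(byAdding: .day, value: 1, to: now)
// Always exactly 86,400 seconds later, regardless of DST.
let plus24Hours = now.addingTimeInterval(24 * 60 * 60)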
BTW, you may be interested in my old implementation of this idea. Rather than use the syntax:
1.seconds
I used the syntax:
1 * Second
And then overloaded multiplication to return a struct rather than a Double. Returning a struct this way gives much better type safety. For example, I could type-check "time * frequency == cycles" and "cycles / time == frequency", which is something you can't do with Double. Unfortunately NSTimeInterval isn't a separate type; it's just another name for Double. So any method you put on NSTimeInterval is applied to every Double (which is sometimes weird).
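A minimal sketch of that struct-based idea (hypothetical type names, not the original library):
struct TimeValue { var seconds: Double }
struct FrequencyValue { var hertz: Double }

func * (lhs: Double, rhs: TimeValue) -> TimeValue {
    return TimeValue(seconds: lhs * rhs.seconds)
}

func * (lhs: TimeValue, rhs: FrequencyValue) -> Double {
    // time * frequency == cycles, a plain (dimensionless) number
    return lhs.seconds * rhs.hertz
}

let Second = TimeValue(seconds: 1)
let Hertz = FrequencyValue(hertz: 1)

let duration = 90 * Second     // TimeValue, not a bare Double
let cycles = duration * Hertz  // 90.0
The compiler now rejects accidental mixes like duration + cycles, which is exactly the kind of type safety being described.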
Personally I'd probably solve this whole problem this way:
let Second: NSTimeInterval = 1
let Seconds = Second
let Minute = 60 * Seconds
let Minutes = Minute
let Hour = 60 * Minutes
let Hours = Hour
let x = 100*Seconds
You don't even need to overload the operators. It's already done for you.
NSTimeInterval is just an alias for Double. You can easily achieve what you want by defining your own struct, like this:
struct MyTimeInterval {
    var totalSecs: NSTimeInterval

    var totalHours: NSTimeInterval {
        get {
            return self.totalSecs / 3600.0
        }
        set {
            self.totalSecs = newValue * 3600.0
        }
    }

    init(_ secs: NSTimeInterval) {
        self.totalSecs = secs
    }

    func totalHourString() -> String {
        return String(format: "%.2f hours", arguments: [self.totalHours])
    }
}
var t = MyTimeInterval(5400)
print(t.totalHourString())
extension Int {
    var seconds: Int {
        return self
    }
    var minutes: Int {
        return self * 60
    }
    var hours: Int {
        return minutes * 60
    }
    var timeString: String {
        let seconds = abs(self) % 60
        let minutes = ((abs(self) - seconds) / 60) % 60
        let hours = (abs(self) - seconds) / 3600
        let sign = self < 0 ? "-" : " "
        let str = String(format: "\(sign)%2d:%02d:%02d", arguments: [Int(hours), Int(minutes), Int(seconds)])
        return str
    }
}
6.minutes == 360.seconds // true
let t1 = 1.hours + 22.minutes + 12.seconds
let t2 = 12.hours - 15.minutes
let time = t1 - t2
print("t1 =",t1.timeString,t1) // t1 = 1:22:12 4932
print("t2 =",t2.timeString,t2) // t2 = 11:45:00 42300
print("t1-t2 =",time.timeString,time) // t1-t2 = -10:22:48 -37368
(12.hours / 30.minutes) == 24 // true

Is there no default(T) in Swift?

I'm trying to port the Matrix example from Swift book to be generic.
Here's what I got so far:
struct Matrix<T> {
    let rows: Int, columns: Int
    var grid: T[]

    init(rows: Int, columns: Int, repeatedValue: T) {
        self.rows = rows
        self.columns = columns
        grid = Array(count: rows * columns, repeatedValue: repeatedValue)
    }

    func indexIsValidForRow(row: Int, column: Int) -> Bool {
        return row >= 0 && row < rows && column >= 0 && column < columns
    }

    subscript(row: Int, column: Int) -> T {
        get {
            assert(indexIsValidForRow(row, column: column), "Index out of range")
            return grid[(row * columns) + column]
        }
        set {
            assert(indexIsValidForRow(row, column: column), "Index out of range")
            grid[(row * columns) + column] = newValue
        }
    }
}
Note that I had to pass repeatedValue: T to the constructor.
In C#, I would have just used default(T), which would be 0 for numbers, false for booleans and null for reference types. I understand that Swift doesn't allow nil on non-optional types, but I'm still curious whether passing an explicit parameter is the only way, or if there is some equivalent of default(T) there.
There isn't. Swift forces you to specify the default value, just like when you handle variables and fields. The only case where Swift has a concept of default value is for optional types, where it's nil (Optional.None).
An iffy 'YES'. You can use protocol constraints to specify the requirement that your generic class or function will only work with types that implement the default init function (parameter-less). The ramifications of this will most likely be bad (it doesn't work the way you think it does), but it is the closest thing to what you were asking for, probably closer than the 'NO' answer.
For me I found this personally to be helpful during development of a new generic class, and then eventually I remove the constraint and fix the remaining issues. Requiring only types that can take on a default value will limit the usefulness of your generic data type.
public protocol Defaultable {
    init()
}

struct Matrix<Type: Defaultable> {
    let rows: Int
    let columns: Int
    var grid: [Type]

    init(rows: Int, columns: Int) {
        self.rows = rows
        self.columns = columns
        grid = Array(count: rows * columns, repeatedValue: Type())
    }
}
There is a way to get the equivalent of default(T) in swift, but it's not free and it has an associated hazard:
public func defaultValue<T>() -> T {
    let ptr = UnsafeMutablePointer<T>.alloc(1)
    let retval = ptr.memory
    ptr.dealloc(1)
    return retval
}
Now this is clearly a hack because we don't know if alloc() initializes to something knowable. Is it all 0's? Stuff left over in the heap? Who knows? Furthermore, what it is today could be something different tomorrow.
In fact, using the return value for anything other than a placeholder is dangerous. Let's say that you have code like this:
public class Foo { /* implementation */ }
public struct Bar { public var x: Foo }

var t: Bar = defaultValue()
t = someFactoryThatReturnsBar() // here's our problem
At the problem line, Swift thinks that t has been initialized, because that's what Swift's semantics say: you cannot have a variable of a value type that is uninitialized. Except that it is uninitialized, because defaultValue<T> breaks those semantics. When you do the assignment, Swift emits a call into the value witness table to destroy the existing value. This will include code that calls release on the field x, because Swift semantics say that instances of objects are never nil. And then you get a runtime crash.
However, I had cause to interoperate with Swift from another language and I had to pass in an optional type. Unfortunately, Swift doesn't provide me with a way to construct an optional at runtime because of reasons (at least I haven't found a way), and I can't easily mock one because optionals are implemented in terms of a generic enum and enums use a poorly documented 5 strategy implementation to pack the payload of an enum.
I worked around this by passing a tuple that I'm going to call a Medusa tuple just for grins: (value: T, present: Bool) which has the contract that if present is true, then value is guaranteed to be valid, invalid otherwise. I can use this safely now to interop:
public func toOptional<T>(optTuple: (value: T, present: Bool)) -> T? {
    if optTuple.present { return optTuple.value }
    else { return nil }
}

public func fromOptional<T>(opt: T?) -> (T, Bool) {
    if opt != nil { return (opt!, true) }
    else {
        return (defaultValue(), false)
    }
}
In this way, my calling code passes in a tuple instead of an optional, and the receiving code can turn it into an optional (and the reverse).
In fact, generic types can have optional values.
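For example (a small sketch of that remark in current Swift syntax, not code from the question), you can make the storage itself optional so that nil plays the role of default(T):
struct OptionalMatrix<T> {
    let rows: Int, columns: Int
    var grid: [T?]

    init(rows: Int, columns: Int) {
        self.rows = rows
        self.columns = columns
        grid = Array(repeating: nil, count: rows * columns) // every cell starts out as nil
    }
}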
You can do it with protocols
class ViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
let someType = SomeType<String>()
let someTypeWithDefaultGeneric = SomeType()
}
}
protocol Default {
init()
}
extension Default where Self: SomeType<Void> {
init() {
self.init()
}
}
class SomeType<T>: Default {
required init() {}
}