Why does conforming to Strideable change how equality is evaluated? - swift

I created an infinite loop by conforming a Swift struct to the Strideable protocol. I reduced the problem to the following case.
struct T: Strideable {
    func advanced(by n: Int) -> T { return T() }
    func distance(to other: T) -> Int {
        print("Hello")
        return self == T() ? 0 : 1
    }
}
print(T() == T())
Running this code in a playground results in an unending stream of "Hello"s. If struct T: Strideable is replaced with struct T: Equatable, "true" prints as I would expect.
I suspect that there is a default implementation of Equatable for types conforming to Strideable which is distinct from the auto-synthesized implementation for structs whose members are all Equatable. The struct in my project has many members, so I would not like to manually implement a member-wise comparison.
Why does conforming to Strideable change how equality is implemented, and is there a way to recover the expected behavior without manually implementing Equatable?

This is from Apple's documentation:
Important
The Strideable protocol provides default implementations for the equal-to (==) and less-than (<) operators that depend on the Stride type’s implementations. If a type conforming to Strideable is its own Stride type, it must provide concrete implementations of the two operators to avoid infinite recursion.
So either you provide the == and < implementations yourself, like this:
struct Temp: Strideable {
    var error = 1
    func advanced(by n: Int) -> Temp { return Temp() }
    func distance(to other: Temp) -> Int {
        print("hello")
        return self == other ? 0 : 1
    }
    static func == (lhs: Temp, rhs: Temp) -> Bool {
        print("great")
        return lhs.error == rhs.error
    }
}
or use a stored property to manage the stride, like var location: Int = 0.
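As the documentation suggests, the way out is to provide a concrete == yourself. A minimal sketch (the value member is hypothetical, standing in for your struct's real members):

struct T: Strideable {
    var value = 0 // hypothetical stored member

    func advanced(by n: Int) -> T { return T(value: value + n) }
    func distance(to other: T) -> Int { return other.value - value }

    // A concrete == takes precedence over Strideable's default
    // implementation (which is defined in terms of distance(to:)),
    // so the mutual recursion between == and distance(to:) is broken.
    static func == (lhs: T, rhs: T) -> Bool {
        return lhs.value == rhs.value
    }
}

print(T() == T()) // true, and no infinite loop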

How does Swift infer Sequence requirements when Self implements IteratorProtocol?

I'm reading the Sequence documentation, and in an example they use (see below) the requirements for Sequence are inferred. My question is how it is inferred. I understand how inference works in simpler cases, but in this example I cannot understand how Swift infers things.
I see that Sequence's makeIterator method has a default implementation but I don't understand how the return value is inferred here.
Example
struct Countdown: Sequence, IteratorProtocol {
    var count: Int
    mutating func next() -> Int? {
        if count == 0 {
            return nil
        } else {
            defer { count -= 1 }
            return count
        }
    }
}
Reference for the types mentioned above (simplified):
protocol IteratorProtocol {
    associatedtype Element
    mutating func next() -> Element?
}
protocol Sequence {
    associatedtype Element
    associatedtype Iterator: IteratorProtocol where Iterator.Element == Element
    func makeIterator() -> Iterator
}
Let's start with the implementation of IteratorProtocol.next. The compiler sees this implementation:
mutating func next() -> Int? {
    if count == 0 {
        return nil
    } else {
        defer { count -= 1 }
        return count
    }
}
And notices that it returns an Int?. Well, IteratorProtocol.next is supposed to return a Self.Element?, so it infers that IteratorProtocol.Element == Int. Now Countdown satisfies IteratorProtocol.
Note that Sequence and IteratorProtocol share the associated type Element. Once Swift figures out the witness for IteratorProtocol.Element, it's as if you declared a new type alias Element in Countdown, and it just so happens that Sequence requires Countdown.Element to exist.
After that, the compiler infers Iterator == Self. This is so that the default implementation of makeIterator is available. However, it is quite a mystery how the compiler can infer this, because with only this information the type can't normally be inferred, as can be shown by creating your own sequence and iterator protocols.
protocol MyIterator {
    associatedtype Element
    mutating func next() -> Element?
}
protocol MySequence {
    associatedtype Element where Element == Iterator.Element
    associatedtype Iterator : MyIterator
    func makeIterator() -> Iterator
}
extension MySequence where Self == Self.Iterator {
    func makeIterator() -> Iterator {
        return self
    }
}
struct Countdown: MySequence, MyIterator { // doesn't compile
    var count: Int
    mutating func next() -> Int? {
        if count == 0 {
            return nil
        } else {
            defer { count -= 1 }
            return count
        }
    }
}
After looking into the source code, I suspect there might be some compiler magic going on, especially here:
// Provides a default associated type witness for Iterator when the
// Self type is both a Sequence and an Iterator.
extension Sequence where Self: IteratorProtocol {
    // @_implements(Sequence, Iterator)
    public typealias _Default_Iterator = Self
}
This seems to set a "preferred" type for Iterator to be inferred as. It seems to be saying "When Iterator can't be inferred to be anything, try Self". I can't find _Default_Iterator anywhere else, which is why I concluded it's compiler magic. The whole purpose of this is to allow you to conform to Sequence by only conforming to IteratorProtocol and implementing next, as the documentation said you can do.
Now that Iterator == Self, we have also satisfied the constraint on Element:
associatedtype Element where Self.Element == Self.Iterator.Element
Thus we have shown that Countdown conforms to Sequence.
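To see the inferred conformance at work, Countdown can now be used anywhere a Sequence is expected, for example in a for-in loop (this matches the documentation's example output):

for number in Countdown(count: 3) {
    print(number)
}
// Prints:
// 3
// 2
// 1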

Swift Comparable protocol greater-than method

class Test {
    var count: Int
    init(count: Int) {
        self.count = count
    }
}
extension Test: Comparable {
    static func < (lhs: Test, rhs: Test) -> Bool {
        return lhs.count > rhs.count
    }
}
When I write this extension everything works okay, but when I change < to >, the compiler returns an error:
Type 'Test' does not conform to protocol 'Equatable'
Why does conforming to Comparable require writing the < function specifically? What is the reason?
If you look at the docs for Comparable, you can see that it inherits from Equatable.
Equatable requires == to be implemented:
static func == (lhs: Test, rhs: Test) -> Bool {
    return lhs.count == rhs.count
}
I should also mention that count does not have an initial value, so you need to add an initializer for Test or an initial value to count.
EDIT:
If you look at the docs for Comparable, you'll find this bit:
Types with Comparable conformance implement the less-than operator (<)
and the equal-to operator (==). These two operations impose a strict
total order on the values of a type, in which exactly one of the
following must be true for any two values a and b:
a == b
a < b
b < a
So you must implement < and == but > is unnecessary. That's why it doesn't work when you only have >.
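Putting it together, a minimal sketch of the full conformance (with < in its natural direction): once == and < are supplied, the standard library derives >, <=, and >= for free.

class Test {
    var count: Int
    init(count: Int) {
        self.count = count
    }
}

extension Test: Comparable {
    static func == (lhs: Test, rhs: Test) -> Bool {
        return lhs.count == rhs.count
    }
    static func < (lhs: Test, rhs: Test) -> Bool {
        return lhs.count < rhs.count
    }
}

let a = Test(count: 1)
let b = Test(count: 2)
print(a < b) // true
print(a > b) // false: > is derived from < by Comparable's default implementations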

Understanding protocol extensions in Swift

I am trying to implement a basic protocol extension like so:
protocol Value {
    func get() -> Float
    mutating func set(to: Float)
}
extension Value {
    static func min(of a: Value, and b: Value) -> Float {
        if a < b { // Expression type 'Bool' is ambiguous without more context
            return a.get()
        } else {
            return b.get()
        }
    }
    static func < (a: Value, b: Value) -> Bool {
        return a.get() < b.get()
    }
}
At the if clause the compiler says: Expression type 'Bool' is ambiguous without more context. Why doesn't this work?
As touched on in this Q&A, there's a difference between operator overloads implemented as static members and operator overloads implemented as top-level functions. static members take an additional (implicit) self parameter, which the compiler needs to be able to infer.
So how is the value of self inferred? Well, it has to be done from either the operands or return type of the overload. For a protocol extension, this means one of those types needs to be Self. Bear in mind that you can't directly call an operator on a type (i.e. you can't say (Self.<)(a, b)).
Consider the following example:
protocol Value {
    func get() -> Float
}
extension Value {
    static func < (a: Value, b: Value) -> Bool {
        print("Being called on conforming type: \(self)")
        return a.get() < b.get()
    }
}
struct S : Value {
    func get() -> Float { return 0 }
}
let value: Value = S()
print(value < value) // Ambiguous reference to member '<'
What's the value of self in the call to <? The compiler can't infer it (really I think it should error directly on the overload as it's un-callable). Bear in mind that self at static scope in a protocol extension must be a concrete conforming type; it can't just be Value.self (as static methods in protocol extensions are only available to call on concrete conforming types, not on the protocol type itself).
We can fix both the above example, and your example by defining the overload as a top-level function instead:
protocol Value {
    func get() -> Float
}
func < (a: Value, b: Value) -> Bool {
    return a.get() < b.get()
}
struct S : Value {
    func get() -> Float { return 0 }
}
let value: Value = S()
print(value < value) // false
This works because now we don't need to infer a value for self.
We could have also given the compiler a way to infer the value of self, by making one or both of the parameters take Self:
protocol Value {
    func get() -> Float
}
extension Value {
    static func < (a: Self, b: Self) -> Bool {
        print("Being called on conforming type: \(self)")
        return a.get() < b.get()
    }
}
struct S : Value {
    func get() -> Float { return 0 }
}
let s = S()
print(s < s)
// Being called on conforming type: S
// false
The compiler can now infer self from the static type of the operands. However, as said above, this needs to be a concrete type, so you can't deal with heterogeneous Value operands (you could work with one operand taking a Value, but not both, as then there'd be no way to infer self).
Although note that if you're providing a default implementation of <, you should probably also provide a default implementation of ==. Unless you have a good reason not to, I would also advise you make these overloads take homogenous concrete operands (i.e parameters of type Self), such that they can provide a default implementation for Comparable.
Also rather than having get() and set(to:) requirements, I would advise a settable property requirement instead:
// Not deriving from Comparable could be useful if you need to use the protocol as
// an actual type; however note that you won't be able to access Comparable stuff,
// such as the auto >, <=, >= overloads from a protocol extension.
protocol Value {
    var floatValue: Double { get set }
}
extension Value {
    static func == (lhs: Self, rhs: Self) -> Bool {
        return lhs.floatValue == rhs.floatValue
    }
    static func < (lhs: Self, rhs: Self) -> Bool {
        return lhs.floatValue < rhs.floatValue
    }
}
Finally, if Comparable conformance is essential for conformance to Value, you should make it derive from Comparable:
protocol Value : Comparable {
    var floatValue: Double { get set }
}
You shouldn't need a min(of:and:) function in either case, as when the conforming type conforms to Comparable, it can use the top-level min(_:_:) function.
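A short usage sketch of that last, Comparable-deriving version (the struct S and its contents are illustrative, not from the original question):

protocol Value: Comparable {
    var floatValue: Double { get set }
}
extension Value {
    static func == (lhs: Self, rhs: Self) -> Bool {
        return lhs.floatValue == rhs.floatValue
    }
    static func < (lhs: Self, rhs: Self) -> Bool {
        return lhs.floatValue < rhs.floatValue
    }
}

struct S: Value {
    var floatValue: Double
}

let a = S(floatValue: 1)
let b = S(floatValue: 2)
print(min(a, b).floatValue) // 1.0, via the standard library's min(_:_:)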
You can't write
if a < b {
because a and b have type Value, which is NOT Comparable.
However, you can compare the float values associated with a and b:
if a.get() < b.get() {
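Following that suggestion, a sketch of the original min(of:and:) (using the question's Value protocol) written without any operator overload at all:

extension Value {
    static func min(of a: Value, and b: Value) -> Float {
        // Compare the underlying Float values; the Value existentials
        // themselves never need to be Comparable.
        return a.get() < b.get() ? a.get() : b.get()
    }
}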
If you want to be able to make types that can use operators such as >, <, ==, etc., they have to conform to the Comparable protocol:
protocol Value: Comparable {
    func get() -> Float
    mutating func set(to: Float)
}
This comes with more restrictions though. You will have to change all the Value types in the protocol extension to Self:
extension Value {
    static func min(of a: Self, and b: Self) -> Float {
        if a < b { // no longer ambiguous: a and b are now Self
            return a.get()
        } else {
            return b.get()
        }
    }
    static func < (a: Self, b: Self) -> Bool {
        return a.get() < b.get()
    }
}
The Self types get replaced with the type that implements the protocol. So if I implemented Value on a type Container, the method signatures would look like this:
class Container: Value {
    static func min(of a: Container, and b: Container) -> Float
    static func < (a: Container, b: Container) -> Bool
}
As a side note, if you want Value to conform to Comparable, you also need to add the == operator to the Value extension:
static func == (lhs: Self, rhs: Self) -> Bool {
    return lhs.get() == rhs.get()
}

Self in protocol

I am learning Swift and playing with Xcode, and I always dig into the definitions. I have seen this:
public protocol GeneratorType {
    typealias Element
    @warn_unused_result
    public mutating func next() -> Self.Element?
}
A struct that conforms to this protocol:
public struct IndexingGenerator<Elements : Indexable> : GeneratorType, SequenceType {
    public init(_ elements: Elements)
    public mutating func next() -> Elements._Element?
}
I know 'Self' means the conforming type, but what does 'Self.Element' mean?
And in the function that implements the requirement by returning 'Elements._Element?', I can't see how 'Elements._Element?' is equal to 'Self.Element?'.
Can anyone explain this to me, and tell me more about it? Thank you.
Self.Element refers to the concrete type that any type implementing the GeneratorType protocol will declare as its Element typealias.
For example, in this generator of Fibonacci numbers:
struct Fibonacci: GeneratorType {
    typealias Element = Int
    private var value: Int = 1
    private var previous: Int = 0
    mutating func next() -> Element? {
        let newValue = value + previous
        previous = value
        value = newValue
        return previous
    }
}
... you implement the GeneratorType protocol and indicate what its Element typealias will be (Int in this case), and that's the type that the generator's next() will be returning (well, actually the optional of that type).
Quite often, though, you would not have to explicitly specify typealiases when implementing parametrised protocols, as Swift is smart enough to infer them for you. E.g. for the Fibonacci numbers generator from the above example the following will also do:
struct Fibonacci: GeneratorType {
    private var value: Int = 1
    private var previous: Int = 0
    mutating func next() -> Int? {
        let newValue = value + previous
        previous = value
        value = newValue
        return previous
    }
}
... Swift knows from the signature of next() that it returns Int?, and that GeneratorType implementors also must have next() in their to-do list, and that these methods must return Element? types. So, Swift just puts 2 and 2 together, and infers that Element? must be the same thing as Int?, and therefore Element == Int.
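(For readers on Swift 3 or later: GeneratorType was renamed IteratorProtocol, and the inference works exactly the same way. The same generator in modern spelling, as a sketch:)

struct Fibonacci: IteratorProtocol {
    private var value = 1
    private var previous = 0
    mutating func next() -> Int? {
        // Element is inferred to be Int from this return type.
        let newValue = value + previous
        previous = value
        value = newValue
        return previous
    }
}

var fib = Fibonacci()
print(fib.next()!) // 1
print(fib.next()!) // 1
print(fib.next()!) // 2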
About this:
public struct IndexingGenerator<Elements : Indexable> : GeneratorType, SequenceType {
    public init(_ elements: Elements)
    public mutating func next() -> Elements._Element?
}
Here we have four things going on:
1. We declare a generic type IndexingGenerator that takes a parameter type called Elements.
2. This Elements type has a constraint that it must implement the Indexable protocol.
3. The generator that we implement is supposed to return values of the type that is accessible via the Indexable interface of Elements, which is known to IndexingGenerator via dot syntax as Elements._Element.
4. Swift infers that the Element of IndexingGenerator is the same thing as Elements._Element.
So, essentially the above declaration is equivalent to:
public struct IndexingGenerator<Elements : Indexable> : GeneratorType, SequenceType {
    public typealias Element = Elements._Element
    public init(_ elements: Elements)
    public mutating func next() -> Element?
}
Finally, if you're curious why it's _Element and not just Element like in GeneratorType, here is what they write in the open-source Swift repository (under swift/stdlib/public/core/Collection.swift):
The declaration of _Element and subscript here is a trick used to break a cyclic conformance/deduction that Swift can't handle. We need something other than a CollectionType.Generator.Element that can be used as IndexingGenerator<T>'s Element. Here we arrange for the CollectionType itself to have an Element type that's deducible from its subscript. Ideally we'd like to constrain this Element to be the same as CollectionType.Generator.Element, but we have no way of expressing it today.

How can we create a generic Array Extension that sums Number types in Swift?

Swift lets you create an Array extension that sums Integers with:
extension Array {
    func sum() -> Int {
        return self.map { $0 as Int }.reduce(0) { $0 + $1 }
    }
}
Which can now be used to sum Int[] like:
[1,2,3].sum() //6
But how can we make a generic version that supports summing other Number types like Double[] as well?
[1.1,2.1,3.1].sum() //fails
This question is NOT how to sum numbers, but how to create a generic Array Extension to do it.
Getting Closer
This is the closest I've been able to get if it helps anyone get closer to the solution:
You can create a protocol that fulfills what we need to do, i.e.:
protocol Addable {
    func +(lhs: Self, rhs: Self) -> Self
    init()
}
Then extend each of the types we want to support to conform to the above protocol:
extension Int : Addable {
}
extension Double : Addable {
}
And then add an extension with that constraint:
extension Array {
    func sum<T : Addable>(min: T) -> T {
        return self.map { $0 as T }.reduce(min) { $0 + $1 }
    }
}
Which can now be used against numbers that we've extended to support the protocol, i.e.:
[1,2,3].sum(0) //6
[1.1,2.1,3.1].sum(0.0) //6.3
Unfortunately I haven't been able to get it working without having to supply an argument, i.e.:
func sum<T : Addable>(x: T...) -> T? {
    return self.map { $0 as T }.reduce(T()) { $0 + $1 }
}
The modified method still works with 1 argument:
[1,2,3].sum(0) //6
But the compiler is unable to resolve the method when calling it with no arguments:
[1,2,3].sum() //Could not find member 'sum'
Adding Integer to the method signature also doesn't help method resolution:
func sum<T where T : Integer, T: Addable>() -> T? {
    return self.map { $0 as T }.reduce(T()) { $0 + $1 }
}
But hopefully this will help others come closer to the solution.
Some Progress
From @GabrielePetronella's answer, it looks like we can call the above method if we explicitly specify the type at the call site, like:
let i:Int = [1,2,3].sum()
let d:Double = [1.1,2.2,3.3].sum()
As of Swift 2 it's possible to do this using protocol extensions. (See The Swift Programming Language: Protocols for more information).
First of all, the Addable protocol:
protocol Addable: IntegerLiteralConvertible {
    func + (lhs: Self, rhs: Self) -> Self
}
extension Int : Addable {}
extension Double: Addable {}
// ...
Next, extend SequenceType to add a sum over sequences of Addable elements (the IntegerLiteralConvertible constraint is what lets the integer literal 0 serve as the initial value for any Addable element type):
extension SequenceType where Generator.Element: Addable {
    var sum: Generator.Element {
        return reduce(0, combine: +)
    }
}
Usage:
let ints = [0, 1, 2, 3]
print(ints.sum) // Prints: "6"
let doubles = [0.0, 1.0, 2.0, 3.0]
print(doubles.sum) // Prints: "6.0"
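On modern Swift (4.1 or later) the same idea can be written against Sequence directly. A sketch, noting the renames (SequenceType became Sequence, IntegerLiteralConvertible became ExpressibleByIntegerLiteral) and that operator requirements are now declared static:

protocol Addable: ExpressibleByIntegerLiteral {
    static func + (lhs: Self, rhs: Self) -> Self
}
extension Int: Addable {}
extension Double: Addable {}

extension Sequence where Element: Addable {
    var sum: Element {
        return reduce(0, +)
    }
}

print([1, 2, 3].sum)       // 6
print([0.5, 1.0, 1.5].sum) // 3.0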
I think I found a reasonable way of doing it, borrowing some ideas from scalaz and starting from your proposed implementation.
Basically what we want is to have typeclasses that represent monoids.
In other words, we need:
- an associative function
- an identity value (i.e. a zero)
Here's a proposed solution, which works around the Swift type system's limitations.
First of all, our friendly Addable typeclass:
protocol Addable {
    class func add(lhs: Self, _ rhs: Self) -> Self
    class func zero() -> Self
}
Now let's make Int implement it.
extension Int: Addable {
    static func add(lhs: Int, _ rhs: Int) -> Int {
        return lhs + rhs
    }
    static func zero() -> Int {
        return 0
    }
}
So far so good. Now we have all the pieces we need to build a generic sum function:
extension Array {
    func sum<T : Addable>() -> T {
        return self.map { $0 as T }.reduce(T.zero()) { T.add($0, $1) }
    }
}
Let's test it
let result: Int = [1,2,3].sum() // 6, yay!
Due to limitations of the type system, you need to explicitly annotate the result type, since the compiler is not able to figure out by itself that Addable resolves to Int.
So you cannot just do:
let result = [1,2,3].sum()
I think it's a bearable drawback of this approach.
Of course, this is completely generic and it can be used on any class, for any kind of monoid.
The reason why I'm not using the default + operator, but instead defining an add function, is that this allows any type to implement the Addable typeclass. If you use +, then for a type which has no + operator defined you have to implement that operator at global scope, which I kind of dislike.
Anyway, here's how it would work if you need, for instance, to make both Int and String 'multipliable', given that * is defined for Int but not for String.
protocol Multipliable {
    func *(lhs: Self, rhs: Self) -> Self
    class func m_zero() -> Self
}
func *(lhs: String, rhs: String) -> String {
    return rhs + lhs
}
extension String: Multipliable {
    static func m_zero() -> String {
        return ""
    }
}
extension Int: Multipliable {
    static func m_zero() -> Int {
        return 1
    }
}
extension Array {
    func mult<T: Multipliable>() -> T {
        return self.map { $0 as T }.reduce(T.m_zero()) { $0 * $1 }
    }
}
let y: String = ["hello", " ", "world"].mult()
Now an array of String can use the method mult to perform a reverse concatenation (just a silly example), and the implementation uses the * operator newly defined for String, whereas Int keeps using its usual * operator and we only need to define a zero for the monoid.
For code cleanliness, I much prefer having the whole typeclass implementation live in the extension scope, but I guess it's a matter of taste.
In Swift 2, you can solve it like this:
Define the monoid for addition as a protocol:
protocol Addable {
    init()
    func +(lhs: Self, rhs: Self) -> Self
    static var zero: Self { get }
}
extension Addable {
    static var zero: Self { return Self() }
}
In addition to other solutions, this explicitly defines the zero element using the standard initializer.
Then declare Int and Double as Addable:
extension Int: Addable {}
extension Double: Addable {}
Now you can define a sum() method for all Arrays storing Addable elements:
extension Array where Element: Addable {
    func sum() -> Element {
        return self.reduce(Element.zero, combine: +)
    }
}
Here's a silly implementation:
extension Array {
    func sum(arr: Array<Int>) -> Int {
        return arr.reduce(0, { (e1: Int, e2: Int) -> Int in return e1 + e2 })
    }
    func sum(arr: Array<Double>) -> Double {
        return arr.reduce(0, { (e1: Double, e2: Double) -> Double in return e1 + e2 })
    }
}
It's silly because you have to say arr.sum(arr). In other words, it isn't encapsulated; it's a "free" function sum that just happens to be hiding inside Array. Thus I failed to solve the problem you're really trying to solve.
3> [1,2,3].reduce(0, +)
$R2: Int = 6
4> [1.1,2.1,3.1].reduce(0, +)
$R3: Double = 6.3000000000000007
Map, Filter, Reduce and more
From my understanding of the Swift grammar, a type identifier cannot be used with generic parameters, only a generic argument. Hence, the extension declaration can only be used with a concrete type.
It's doable based on prior answers in Swift 1.x with minimal effort:
import Foundation
protocol Addable {
    func +(lhs: Self, rhs: Self) -> Self
    init(_: Int)
    init()
}
extension Int : Addable {}
extension Int8 : Addable {}
extension Int16 : Addable {}
extension Int32 : Addable {}
extension Int64 : Addable {}
extension UInt : Addable {}
extension UInt8 : Addable {}
extension UInt16 : Addable {}
extension UInt32 : Addable {}
extension UInt64 : Addable {}
extension Double : Addable {}
extension Float : Addable {}
extension Float80 : Addable {}
// NSNumber is a messy, fat class for ObjC to box non-NSObject values
// Bit is weird
extension Array {
    func sum<T : Addable>(min: T = T(0)) -> T {
        return map { $0 as! T }.reduce(min) { $0 + $1 }
    }
}
And here: https://gist.github.com/46c1d4d1e9425f730b08
Swift 2, as used elsewhere, plans major improvements, including exception handling, promises and better generic metaprogramming.
Help for anyone else struggling to apply the extension to all Numeric values without it looking messy:
extension Numeric where Self: Comparable {
    /// Limits a numerical value.
    ///
    /// - Parameter range: The range the value is limited to be in.
    /// - Returns: The numerical value clipped to the range.
    func limit(to range: ClosedRange<Self>) -> Self {
        if self < range.lowerBound {
            return range.lowerBound
        } else if self > range.upperBound {
            return range.upperBound
        } else {
            return self
        }
    }
}
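Back to the original summing question: on Swift 5 and later no custom protocol is needed at all, because the standard library's AdditiveArithmetic protocol already provides zero and +. A sketch:

extension Sequence where Element: AdditiveArithmetic {
    func sum() -> Element {
        return reduce(.zero, +)
    }
}

print([1, 2, 3].sum())       // 6
print([1.1, 2.1, 3.1].sum()) // 6.3000000000000007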
}