What does "existential type" mean in Swift? - swift

I am reading Swift Evolution proposal 244 (Opaque Result Types) and don't understand what the following means:
... existential type ...
One could compose these transformations by using the existential type Shape instead of generic arguments, but doing so would imply more dynamism and runtime overhead than may be desired.

An example of an existential type is given in the evolution proposal itself:
protocol Shape {
    func draw(to: Surface)
}
An example of using protocol Shape as an existential type would look like
func collides(with: any Shape) -> Bool
as opposed to using a generic argument Other:
func collides<Other: Shape>(with: Other) -> Bool
It's important to note here that the Shape protocol is not an existential type by itself; only using it in a "protocols-as-types" context, as above, "creates" an existential type from it. See this post from a member of the Swift Core Team:
Also, protocols currently do double-duty as the spelling for existential types, but this relationship has been a common source of confusion.
Also, citing the Swift Generics Evolution article (I recommend reading the whole thing, which explains this in more detail):
The best way to distinguish a protocol type from an existential type is to look at the context. Ask yourself: when I see a reference to a protocol name like Shape, is it appearing at a type level, or at a value level? Revisiting some earlier examples, we see:
func addShape<T: Shape>() -> T
// Here, Shape appears at the type level, and so is referencing the protocol type
var shape: any Shape = Rectangle()
// Here, Shape appears at the value level, and so creates an existential type
Deeper dive
Why is it called an "existential"? I never saw an unambiguous confirmation of this, but I assume that the feature is inspired by languages with more advanced type systems, e.g. consider Haskell's existential types:
class Buffer -- declaration of type class `Buffer` follows here
data Worker x y = forall b. Buffer b => Worker {
    buffer :: b,
    input :: x,
    output :: y
}
which is roughly equivalent to this Swift snippet (if we assume that Swift's protocols more or less represent Haskell's type classes):
protocol Buffer {}
struct Worker<X, Y> {
    let buffer: any Buffer
    let input: X
    let output: Y
}
Note that the Haskell example uses the forall quantifier. You could read this as: "for all types that conform to the Buffer type class ("protocol" in Swift), values of type Worker have exactly the same type as long as their X and Y type parameters are the same". Thus, given
extension String: Buffer {}
extension Data: Buffer {}
values Worker(buffer: "", input: 5, output: "five") and Worker(buffer: Data(), input: 5, output: "five") would have exactly the same types.
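You can check this in Swift directly; here is a minimal, self-contained sketch (it repeats the declarations above so it runs on its own, and imports Foundation for Data):

import Foundation

protocol Buffer {}
extension String: Buffer {}
extension Data: Buffer {}

struct Worker<X, Y> {
    let buffer: any Buffer
    let input: X
    let output: Y
}

// Different concrete buffers, yet the same Worker type.
let w1 = Worker(buffer: "", input: 5, output: "five")
let w2 = Worker(buffer: Data(), input: 5, output: "five")
print(type(of: w1) == type(of: w2))  // true: both are Worker<Int, String>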
This is a powerful feature which allows things such as heterogeneous collections, and it can be used in many more places where you need to "erase" the original type of a value and "hide" it behind an existential type. Like all powerful features it can be abused: it can make code less type-safe and/or less performant, so it should be used with care.
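As a quick illustration of the heterogeneous-collection point, here is a self-contained sketch (Surface is stubbed out as an empty struct, and Rectangle and Circle are stand-in conformers):

struct Surface {}
protocol Shape { func draw(to: Surface) }
struct Rectangle: Shape { func draw(to: Surface) {} }
struct Circle: Shape { func draw(to: Surface) {} }

// Each element may have a different concrete type; the existential erases it.
let shapes: [any Shape] = [Rectangle(), Circle()]
for shape in shapes { shape.draw(to: Surface()) }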
If you want an even deeper dive, check out Protocols with Associated Types (PATs), which currently can't be used as existentials for various reasons. There are also a few Generalized Existentials proposals being pitched more or less regularly, but nothing concrete as of Swift 5.3. In fact, the original Opaque Result Types proposal linked by the OP solves some of the problems caused by the use of PATs and significantly alleviates the lack of generalized existentials in Swift.

I'm sure you've already used existentials a lot before without noticing it.
To summarize Max's answer: for
var rec: Shape = Rectangle() // Example A
only Shape properties can be accessed. While for:
func addShape<T: Shape>() -> T // Example B
any property of T can be accessed. Because T conforms to Shape, all of Shape's properties can be accessed as well.
The first example is an existential; the second is not.
Example of real code:
protocol Shape {
    var width: Double { get }
    var height: Double { get }
}

struct Rectangle: Shape {
    var width: Double
    var height: Double
    var area: Double
}
let rec1: Shape = Rectangle(width: 1, height: 2, area: 2)
rec1.area // ❌
However:
let rec2 = Rectangle(width: 1, height: 2, area: 2)
func addShape<T: Shape>(_ shape: T) -> T {
    print(type(of: shape)) // Rectangle
    return shape
}
let rec3 = addShape(rec2)
print(rec3.area) // ✅
I'd argue that most Swift users already understand the distinction between an abstract class and a concrete class; this extra jargon just makes things slightly confusing.
The trickiness is that with the 2nd example, to the compiler, the type you return isn't Shape, it's Rectangle i.e. the function signature transforms to this:
func addShape(_ shape: Rectangle) -> Rectangle {
This is only possible because of (constrained) generics.
Yet for rec: Shape = Whatever(), to the compiler the type is Shape regardless of the assigned type. <-- Box Type
Why is it named Existential?
The term "existential" in computer science and programming is borrowed from philosophy, where it refers to the concept of existence and being. In the context of programming, "existential" is used to describe a type that represents the existence of any specific type, without specifying which type it is.
The term is used to reflect the idea that, by wrapping a value in an existential type, you are abstracting away its specific type and only acknowledging its existence.
In other words, an existential type provides a way to handle values of different types in a unified way, while ignoring their specific type information†. This allows you to work with values in a more generic and flexible manner, which can be useful in many situations, such as when creating collections of heterogeneous values, or when working with values of unknown or dynamic types.
The other day I took my kid to an ice-cream shop. She asked what I was having, and since I didn't know which flavor I'd picked, I didn't say it was strawberry or chocolate; I just said "I'm having an ice-cream".
I specified only that it was an ice-cream, without naming its flavor. My daughter could no longer determine if it was red or brown, or whether it had a fruit flavor. I gave her existential-like information.
Had I told her it was chocolate, I would have given her specific information, which is then not existential.
†: In Example B, we're not ignoring the specific type information.
Special thanks to a friend who helped me come up with this answer

I feel like it's worth adding something about why the phrase is important in Swift. And in particular, I think almost always, Swift is talking about "existential containers". They talk about "existential types", but only really with reference to "stuff that is stored in an existential container". So what is an "existential container"?
As I see it, the key thing is, if you have a variable that you're passing as a parameter or using locally, etc. and you define the type of the variable as Shape then Swift has to do some things under the hood to make it work and that's what they are (obliquely) referring to.
If you think about defining a function in a library/framework module that you're writing that is publicly available and takes for example the parameters public func myFunction(shape1: Shape, shape2: Shape, shape1Rotation: CGFloat?) -> Shape... imagine it (optionally) rotates shape1, "adds" it to shape2 somehow (I leave the details up to your imagination) then returns the result. Coming from other OO languages, we instinctively think that we understand how this works... the function must be implemented only with members available in the Shape protocol.
But the question is: for the compiler, how are the parameters represented in memory? Instinctively, again, we think... it doesn't matter. When someone writes a new program that uses your function at some point in the future, they decide to pass their own shapes in and define them as class Dinosaur: Shape and class CupCake: Shape. As part of defining those new classes, they will have to write implementations of all the methods in protocol Shape, which might be something like func getPointsIterator() -> Iterator<CGPoint>. That works just fine for classes. The calling code defines those classes, instantiates objects from them, and passes them into your function. Your function must have something like a vtable (I think Swift calls it a witness table) for the Shape protocol that says "if you give me an instance of a Shape object, I can tell you exactly where to find the address of the getPointsIterator function". The instance pointer will point to a block of memory on the heap, the start of which is a pointer to the class metadata (vtables, witness tables, etc.), so the compiler can reason about how to find any given method implementation.
But what about value types? Structs and enums can have just about any format in memory, from a one byte Bool to a 500 byte complex nested struct. These are usually passed on the stack or registers on function calls for efficiency. (When Swift exactly knows the type, all code can be compiled knowing the data format and passed on the stack or in registers, etc.)
Now you can see the problem. How can Swift compile the function myFunction so it will work with any possible future value type/struct defined in any code? As I understand it, this is where "existential containers" come in.
The simplest approach would be that any function that takes parameters of one of these "existential types" (types defined just by conforming to a Protocol) must insist that the calling code "box" the value type... that it store the value in a special reference counted "box" on the heap and pass a pointer to this (with all the usual ARC retain/release/autorelease/ownership rules) to your function when the function takes a parameter of type Shape.
Then when a new, weird and wonderful, type is written by some code author in the future, compiling the methods of Shape would have to include a way to accept "boxed" versions of the type. Your myFunction would always handle these "existential types" by handling the box and everything works. I would guess that C# and Java do something like this (boxing) if they have the same problem with non class types (Int, etc.)?
The thing is that for a lot of value types, this can be very inefficient. After all, we are compiling mostly for 64-bit architectures, so a couple of registers can handle 8 bytes each, enough for many simple structures. So Swift came up with a compromise (again, I might be a bit inaccurate on this; I'm giving my idea of the mechanism... feel free to correct). They created "existential containers" of a fixed size: a three-word inline value buffer, a pointer to the type metadata, and one pointer per protocol witness table. For a single-protocol existential, that is five pointers, or 40 bytes on a "normal" 64-bit architecture (most CPUs that run Swift these days).
If you define a struct that conforms to a protocol and its value fits in the three-word buffer (24 bytes or less), then it is stored in the existential container directly. The metadata and witness-table pointers let myFunction find an address for any function in the Shape protocol (just like in the classes case above). If your struct/enum is larger than three words, then the buffer instead holds a pointer to a boxed, heap-allocated copy of the value. Obviously this was considered an optimum compromise, and it seems reasonable... the container can be passed around in a handful of registers in most cases, or a few stack slots if "spilled".
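You can observe this layout from Swift itself; here is a rough, self-contained check, assuming a 64-bit platform and the Swift 5.6+ any spelling (Drawable is a stand-in protocol, and the exact numbers are an implementation detail):

protocol Drawable { func draw() }

struct Small: Drawable {   // 16 bytes: fits the 24-byte inline buffer
    var w, h: Double
    func draw() {}
}

struct Big: Drawable {     // 32 bytes: too big, so the container boxes it on the heap
    var a, b, c, d: Double
    func draw() {}
}

print(MemoryLayout<Small>.size)         // 16
print(MemoryLayout<Big>.size)           // 32
print(MemoryLayout<any Drawable>.size)  // 40: 3-word buffer + metadata + witness table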
I think the reason the Swift team ends up mentioning "existential containers" to the wider community is that this has implications for various ways of using Swift. One obvious implication is performance: there's a sudden performance drop when using functions in this way if the structs are too large for the container's inline buffer.
Another, more fundamental, implication I think is that protocols can be used as parameter types only if they don't have associated type or Self requirements... they are not generic. Otherwise you're into generic function definitions, which is different. That's why we sometimes need to change things like func myFunction(shape: Shape, reflection: Bool) -> Shape into something like func myFunction<S: Shape>(shape: S, reflection: Bool) -> S. They are implemented in very different ways under the hood.

Related

Swift Generics... Checking conformance to protocol with associated type

I'm trying to write a generic function in Swift that takes any number, Int, Float, Double, etc. by setting the generic type to <T: Numeric>. So,
func doSomething<T: Numeric>(with foo: T, and bar: T) -> T {
    // body
}
Most of the body will work for any Numeric type, but along the way, I need to find the remainder... which means I need a different approach for FloatingPoint types vs. Integer types.
When T is an Int, this means using the modulo operator:
let remainder = foo % bar
However, when T is a Double or Float, the modulo operator doesn't work and I need to use the truncatingRemainder(dividingBy:) method:
let remainder = foo.truncatingRemainder(dividingBy: bar)
Where I'm struggling is to find a way to sift these out. In theory, I should just be able to do something like this:
var remainder: T
if T.self as FloatingPoint { // <- This doesn't work
    remainder = foo.truncatingRemainder(dividingBy: bar)
} else {
    remainder = foo % bar
}
This, of course, leads to this error, since FloatingPoint has an associated type:
error: protocol 'FloatingPoint' can only be used as a generic constraint because it has Self or associated type requirements
I understand this error... essentially, FloatingPoint is generic with a still-generic associated type for the adopting type to define.
However, I would like to know the best way to conditionally run select blocks of code that only apply to some more narrowly-defined Protocol than defined with the generic param (T).
Specifically, is there a way to define a single generic function to handle all Numeric types, then differentiate between FloatingPoint and Integer types within it?
There are a couple of issues afoot here.
Numeric is the wrong protocol if you're looking to take remainders.
Unless I am misreading the documentation for Numeric, a matrix type could reasonably conform to Numeric. If a matrix were passed into your function, you would have no real way to compute remainders because that's not a well-defined notion. Consequently, your function shouldn't be defined on all Numeric types. The solution is to define a new protocol which describes types with well-defined remainders. Unfortunately, as Alexander notes in the comments...
Swift will not allow you to extend a protocol to conform to another protocol.
There are various technical reasons for this restriction, mostly centering around difficulties when a type would conform to a protocol in multiple ways.
I think you have two reasonable solutions (both sketched below).
A. Make two different overloads of doSomething, one with T: FloatingPoint and the other with T: BinaryInteger. If there is too much shared code between these implementations, you could refactor the doSomething into multiple functions, most of which would be defined on all of Numeric.
B. Introduce a new protocol RemainderArithmetic: Numeric which describes the operations you want. Then, write explicit conformances for all the particular types you want to use e.g. extension Double: RemainderArithmetic and extension UInt: RemainderArithmetic.
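Here is a minimal sketch of both options. It's illustrative only: the shared Numeric logic is elided, option B's function is named doSomethingB purely so both options can coexist in one snippet, and I've named the protocol requirement modulo to avoid clashing with FloatingPoint's existing remainder(dividingBy:):

// Option A: two overloads, dispatched by the generic constraint.
func doSomething<T: BinaryInteger>(with foo: T, and bar: T) -> T {
    let remainder = foo % bar   // % requires BinaryInteger, not just Numeric
    return remainder
}

func doSomething<T: FloatingPoint>(with foo: T, and bar: T) -> T {
    let remainder = foo.truncatingRemainder(dividingBy: bar)
    return remainder
}

print(doSomething(with: 7, and: 3))      // 1   (BinaryInteger overload)
print(doSomething(with: 7.5, and: 3.0))  // 1.5 (FloatingPoint overload)

// Option B: a dedicated protocol with explicit conformances.
protocol RemainderArithmetic: Numeric {
    func modulo(_ other: Self) -> Self
}

extension Int: RemainderArithmetic {
    func modulo(_ other: Int) -> Int { self % other }
}

extension Double: RemainderArithmetic {
    func modulo(_ other: Double) -> Double { truncatingRemainder(dividingBy: other) }
}

func doSomethingB<T: RemainderArithmetic>(with foo: T, and bar: T) -> T {
    foo.modulo(bar)
}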
Neither of these solutions is particularly appealing, but I think they share one key advantage: both make clear the particular semantics of the types you are expecting. You are not really anticipating types other than BinaryIntegers or FloatingPoints, so you shouldn't accept other types. The semantics of remainders can be extremely tricky, as evidenced by the wide range of behaviors described on the Wikipedia page for mod. Therefore, it probably isn't very natural to define a function the same way across integers and floating points. If you are certain that is what you want to do, both of these solutions make your assumptions about what types you are expecting explicit.
If you aren't satisfied by either of these solutions and can provide more details about what exactly you're trying to do, we might be able to find something else.
This doesn't sound like a good use case for a generic. You'll notice that operators like + are not generic in Swift. They use overloading, and so should you.

Tuple vs. Object in Swift

I have read about the differences in Tuples and Dictionaries / Arrays, but I have yet to come across a post on Stack Overflow explaining the difference between a Tuple and an Object in Swift.
The reason I ask is that from experience, it seems that a Tuple could be interchangeable with an Object in Swift in many circumstances (especially in those where the object only holds other objects or data), but could lead to inconsistent / messy code.
In Swift, is there a time to use a Tuple and a time to use a basic Object based on performance or coding methodologies?
As vadian notes, Apple's advice is that tuples only be used for temporary values, and this plays out in practice. If you need to do almost anything non-trivial with a data structure, including store it in a property, you probably do not want a tuple. They're very limited.
I'd avoid the term "object" in this discussion. That's a vague, descriptive term that doesn't cleanly map to any particular data structure. The correct way to think of a tuple is as being in contrast to a struct. In principle, a tuple is just an anonymous struct, but in Swift a tuple is dramatically less flexible than a struct. Most significantly, you cannot add extensions to a tuple, and adding extensions is a core part of Swift programming.
Basically, about the time you're thinking that you need to label the fields of the tuple, you probably should be using a struct instead. Types as simple as "a point" are modeled as structs, not tuples.
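To illustrate with a hypothetical Point (not the CoreGraphics one): the moment the fields deserve names, and especially behavior, a struct pays off:

// A tuple works fine as a throwaway pair...
let p1 = (3.0, 4.0)
print(p1.0, p1.1)

// ...but a struct can name its fields and, crucially, accept extensions.
struct Point {
    var x: Double
    var y: Double
}

extension Point {
    var magnitude: Double { (x * x + y * y).squareRoot() }  // impossible to add to a tuple
}

print(Point(x: 3, y: 4).magnitude)  // 5.0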
So when would you ever use a tuple? Consider the following (non-existent) method on Collection:
extension Collection {
    func headTail() -> (Element?, SubSequence) {
        return (first, dropFirst())
    }
}
This is a good use of a tuple. It would be unhelpful to invent a special struct just to return this value, and callers will almost always want to destructure this anyway like:
let (head, tail) = list.headTail()
This is one thing that tuples can do that structs cannot (at least today; there is ongoing discussion of adding struct destructuring and pattern matching to Swift).
In Swift, Tuple is a Compound Type that holds some properties together, built up from objects of Swift's Named Types, for example class, struct and enum.
I would analogize objects of these Named Types to minerals of chemical elements (like carbon or calcium), and a Tuple to a physical mixture of those minerals (e.g. a pack of 1 part calcium ore and 3 parts carbon ore). You can easily carry this packed tuple around and feed it to a "heat or press" method to get back the "limestone" your app uses in construction.

Does Scala have a value restriction like ML, if not then why?

Here are my thoughts on the question. Can anyone confirm, deny, or elaborate?
I wrote:
Scala doesn’t unify covariant List[A] with a GLB ⊤ assigned to List[Int], bcz afaics in subtyping “biunification” the direction of assignment matters. Thus None must have type Option[⊥] (i.e. Option[Nothing]), ditto Nil type List[Nothing] which can’t accept assignment from an Option[Int] or List[Int] respectively. So the value restriction problem originates from directionless unification and global biunification was thought to be undecidable until the recent research linked above.
You may wish to view the context of the above comment.
ML's value restriction will disallow parametric polymorphism in (formerly thought to be rare, but maybe more prevalent) cases where it would otherwise be sound (i.e. type safe) to allow it, especially for partial application of curried functions (which is important in functional programming), because the alternative typing solutions create a stratification between functional and imperative programming as well as break the encapsulation of modular abstract types. Haskell has an analogous dual monomorphisation restriction. OCaml relaxes the restriction in some cases. I elaborated on some of these details.
EDIT: my original intuition as expressed in the above quote (that the value restriction may be obviated by subtyping) is incorrect. The answers IMO elucidate the issue(s) well and I’m unable to decide which in the set containing Alexey’s, Andreas’, or mine, should be the selected best answer. IMO they’re all worthy.
As I explained before, the need for the value restriction -- or something similar -- arises when you combine parametric polymorphism with mutable references (or certain other effects). That is completely independent from whether the language has type inference or not or whether the language also allows subtyping or not. A canonical counter example like
let r : ∀A.Ref(List(A)) = ref [] in
    r := ["boo"];
    head(!r) + 1
is not affected by the ability to elide the type annotation nor by the ability to add a bound to the quantified type.
Consequently, when you add references to F<:, you need to impose a value restriction to not lose soundness. Similarly, MLsub cannot get rid of the value restriction. Scala enforces a value restriction through its syntax already, since there is no way to even write the definition of a value that would have a polymorphic type.
It's much simpler than that. In Scala values can't have polymorphic types, only methods can. E.g. if you write
val id = x => x
its type isn't [A] A => A.
And if you take a polymorphic method e.g.
def id[A](x: A): A = x
and try to assign it to a value
val id1 = id
again the compiler will try (and in this case fail) to infer a specific A instead of creating a polymorphic value.
So the issue doesn't arise.
EDIT:
If you try to reproduce the http://mlton.org/ValueRestriction#_alternatives_to_the_value_restriction example in Scala, the problem you run into isn't the lack of let: val corresponds to it perfectly well. But you'd need something like
val f[A]: A => A = {
    var r: Option[A] = None
    { x => ... }
}
which is illegal. If you write def f[A]: A => A = ... it's legal but creates a new r on each call. In ML terms it would be like
val f: unit -> ('a -> 'a) =
    fn () =>
        let
            val r: 'a option ref = ref NONE
        in
            fn x =>
                let
                    val y = !r
                    val () = r := SOME x
                in
                    case y of
                        NONE => x
                      | SOME y => y
                end
        end
val _ = f () 13
val _ = f () "foo"
That is, Scala's rules are equivalent to only allowing lambdas as polymorphic values in ML instead of everything value restriction allows.
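For comparison, Swift draws the same line as Scala here: only funcs can be generic, and values can't have polymorphic types, so the issue can't even be expressed. A quick sketch:

func id<A>(_ x: A) -> A { x }   // fine: a polymorphic function

// let id1 = id                 // error: generic parameter 'A' could not be inferred
let id2: (Int) -> Int = id      // fine once A is pinned to a concrete type
print(id2(42))                  // 42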
EDIT: this answer was incorrect before. I have completely rewritten the explanation below to gather my new understanding from the comments under the answers by Andreas and Alexey.
The edit history and the archives of this page at archive.is provide a record of my prior misunderstanding and the discussion. Another reason I chose to edit rather than delete and write a new answer is to retain the comments on this answer. IMO this answer is still needed because, although Alexey answers the thread title correctly and most succinctly, and Andreas' elaboration was the most helpful for me in gaining understanding, the layman reader may require a different, more holistic (yet hopefully still generative-essence) explanation in order to quickly gain some depth of understanding of the issue. Also, I think the other answers obscure how convoluted a holistic explanation is, and I want naive readers to have the option to taste it. The prior elucidations I've found don't state all the details in plain English and instead (as mathematicians tend to do for efficiency) rely on the reader to discern the details from the nuances of the symbolic programming-language examples and prerequisite domain knowledge (e.g. background facts about programming language design).
The value restriction arises where we have mutation of referenced1 type parametrised objects2. The type unsafety that would result without the value restriction is demonstrated in the following MLton code example:
val r: 'a option ref = ref NONE
val r1: string option ref = r
val r2: int option ref = r
val () = r1 := SOME "foo"
val v: int = valOf (!r2)
The NONE value (which is akin to null) contained in the object referenced by r can be assigned to a reference with any concrete type for the type parameter 'a, because r has the polymorphic type 'a. That would allow type unsafety because, as shown in the example above, the same object referenced by r, which has been assigned to both string option ref and int option ref, can be written (i.e. mutated) with a string value via the r1 reference and then read as an int value via the r2 reference. The value restriction generates a compiler error for the above example.
A typing complication arises to prevent3 the (re-)quantification (i.e. binding or determination) of the type parameter (aka type variable) of a said reference (and the object it points to) to a type which differs when reusing an instance of said reference that was previously quantified with a different type.
Such (arguably bewildering and convoluted) cases arise for example where successive function applications (aka calls) reuse the same instance of such a reference. IOW, cases where the type parameters (pertaining to the object) for a reference are (re-)quantified each time the function is applied, yet the same instance of the reference (and the object it points to) being reused for each subsequent application (and quantification) of the function.
Tangentially, the occurrence of these is sometimes non-intuitive due to lack of explicit universal quantifier ∀ (since the implicit rank-1 prenex lexical scope quantification can be dislodged from lexical evaluation order by constructions such as let or coroutines) and the arguably greater irregularity (as compared to Scala) of when unsafe cases may arise in ML’s value restriction:
Andreas wrote:
Unfortunately, ML does not usually make the quantifiers explicit in its syntax, only in its typing rules.
Reusing a referenced object is, for example, desired for let expressions, which, analogous to math notation, should create and evaluate the instantiation of the substitutions only once, even though they may be lexically substituted more than once within the in clause. So, for example, if the function application is evaluated (whether lexically or not) within the in clause while the type parameters of the substitutions are re-quantified for each application (because the instantiation of the substitutions is only lexically within the function application), then type safety can be lost if the applications aren't all forced to quantify the offending type parameters only once (i.e. the offending type parameter is disallowed from being polymorphic).
The value restriction is ML’s compromise to prevent all unsafe cases while also preventing some (formerly thought to be rare) safe cases, so as to simplify the type system. The value restriction is considered a better compromise, because the early (antiquated?) experience with more complicated typing approaches that didn’t restrict any or as many safe cases, caused a bifurcation between imperative and pure functional (aka applicative) programming and leaked some of the encapsulation of abstract types in ML functor modules. I cited some sources and elaborated here.
Tangentially though, I’m pondering whether the early argument against bifurcation really stands up against the fact that value restriction isn’t required at all for call-by-name (e.g. Haskell-esque lazy evaluation when also memoized by need) because conceptually partial applications don’t form closures on already evaluated state; and call-by-name is required for modular compositional reasoning and when combined with purity then modular (category theory and equational reasoning) control and composition of effects. The monomorphisation restriction argument against call-by-name is really about forcing type annotations, yet being explicit when optimal memoization (aka sharing) is required is arguably less onerous given said annotation is needed for modularity and readability any way. Call-by-value is a fine tooth comb level of control, so where we need that low-level control then perhaps we should accept the value restriction, because the rare cases that more complex typing would allow would be less useful in the imperative versus applicative setting. However, I don’t know if the two can be stratified/segregated in the same programming language in smooth/elegant manner. Algebraic effects can be implemented in a CBV language such as ML and they may obviate the value restriction. IOW, if the value restriction is impinging on your code, possibly it’s because your programming language and libraries lack a suitable metamodel for handling effects.
Scala makes a syntactical restriction against all such references, which is a compromise that restricts, for example, the same and even more cases (that would be safe if not restricted) than ML's value restriction, but is more regular in the sense that we'll never be scratching our heads about an error message pertaining to the value restriction. In Scala, we're never allowed to create such a reference. Thus in Scala we can only express cases where a new instance of a reference is created when its type parameters are quantified. Note OCaml relaxes the value restriction in some cases.
Note that afaik neither Scala nor ML enables declaring that a reference itself is immutable1, although the object it points to can be declared immutable with val. Note there's no need for the restriction for references that can't be mutated.
The reason that mutability of the reference type1 is required in order for the complicated typing cases to arise is that if we instantiate the reference (e.g. in the substitutions clause of let) with a non-parametrised object (i.e. not None or Nil4, but instead for example an Option[String] or List[Int]), then the reference won't have a polymorphic type (pertaining to the object it points to) and thus the re-quantification issue never arises. So the problematic cases are due to instantiation with a polymorphic object, then subsequently assigning a newly quantified object (i.e. mutating the reference type) in a re-quantified context, followed by dereferencing (reading) from the (object pointed to by the) reference in a subsequent re-quantified context. As aforementioned, when the re-quantified type parameters conflict, the typing complication arises and unsafe cases must be prevented/restricted.
Phew! If you understood that without reviewing linked examples, I’m impressed.
1 IMO employing the phrase "mutable references" instead of "mutability of the referenced object" and "mutability of the reference type" would be potentially more confusing, because our intention is to mutate the object's value (and its type) which is referenced by the pointer, not the pointer itself. Some programming languages don't even explicitly distinguish, when disallowing mutation for primitive types, between mutating the reference and mutating the object it points to.
2 Wherein an object may even be a function, in a programming language that allows first-class functions.
3 To prevent a segmentation fault at runtime due to accessing (read or write of) the referenced object with a presumption about its statically (i.e. at compile-time) determined type which is not the type that the object actually has.
4 Which are NONE and [] respectively in ML.

Please provide a practical example of how/why you use SinkOf and SinkType in Swift (part of the standard library)?

These are used heavily in the Swift implementation of ReactiveCocoa and in every other functional reactive library I bump into, so they appear to be of interest from that perspective.
It essentially appears to be a struct wrapping a generic value, but this is obviously too simplistic an interpretation. The types have some comments in the swift standard library but I found them a little too vague and google yields little.
I think it helps to think of sinks as the counterpart of generators.
A generator is basically a function of (Void -> T), where T is the type of the thing you're generating. GeneratorType is a protocol that allows structs and classes, etc. to act as generators by giving that function a name: next() -> T?. This is handy because the generator function takes no arguments, so to produce a useful sequence of values you need a place to keep track of some state between invocations. GeneratorOf<T> is a generic type that conforms to the GeneratorType protocol, and you can use it to implement your own generator by passing the next function as a closure.
So, back to sinks. A sink is the inverse of a generator, meaning it's a function of (T -> Void). It also has an associated protocol SinkType, which lets structs and classes act as sinks by defining the sink function as put(T). And there's a ready-made implementation SinkOf<T>, which you can use directly, without defining a new type, by passing an implementation of the put function as a closure. Since the sink function returns Void, it's less likely to need internal state, but if you do need it, you can implement SinkType on your own class/struct.
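To make that concrete, here's a sketch against the Swift 1.x-era API (SinkType and SinkOf were removed from the standard library in Swift 3, so this won't compile on modern Swift):

var log: [String] = []

// SinkOf wraps this closure as its put() implementation.
var sink = SinkOf<String> { value in
    log.append(value)
}

sink.put("hello")
sink.put("world")
// log is now ["hello", "world"]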
So SinkOf doesn't exactly wrap a generic value so much as it wraps a function that takes a generic value and does something with it. In the forthcoming ReactiveCocoa 3 rewrite, they're used to replace the RAC 2 concept of a "subscriber".
In RAC 2, a signal outputs events by calling the sendNext:, sendError:, and sendCompleted methods of an object that implements the RACSubscriber protocol. RAC 3 replaces the separate methods with a sink of Events, where an Event<T,ErrorType> is an enum that can be one of the cases .Next(T), .Error(ErrorType), and .Completed, plus the new .Interrupted case.
Wrapping up all the event types in a single enumeration makes it clearer what a signal is all about: it's a stream of events. Events arrive one at a time, and can be one of four types (three in RAC2). Without swift's enum type, those events had to be dispatched separately, since the type of value associated with an event is different for each event type. With enums that can carry values of different types, all you need to implement RACSubscriber is a place to put the Event, or in other words, a sink.
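A simplified sketch of the shape of that enum, in modern Swift spelling (not the library's exact definition; RAC 3 itself used ErrorType-era syntax and boxed payloads):

enum Event<T, E: Error> {
    case next(T)        // a new value arrived
    case error(E)       // the signal failed
    case completed      // the signal finished normally
    case interrupted    // the signal was cancelled (new in RAC 3)
}

// A subscriber is then just a sink of events: a single (Event) -> Void function.
func handle(_ event: Event<Int, Never>) {
    switch event {
    case .next(let value): print("next:", value)
    case .error(let e):    print("error:", e)
    case .completed:       print("completed")
    case .interrupted:     print("interrupted")
    }
}

handle(.next(42))   // prints "next: 42"
handle(.completed)  // prints "completed"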
That still leaves the question of why to bother having a SinkType instead of just passing around closures of type (T -> Void). The docs only really talk about the compiler optimization benefits, but I personally think it's useful because it gives you a name to work with. If I see a closure of (T -> Void), I have to briefly pause to parse out the types, and if I'm passing them around all the time there's a lot of parens and arrows cluttering up my definitions.
Plus, it's a nice conceptual match if you're working with generators. Since a RAC Signal is conceptually a generator of events, it makes sense to talk in terms of sinks, which are the generator's natural complement.

Recreating the Functor type in Swift

I'm wanting to make the Functor type in Swift as a protocol, but I'm having trouble with the type clauses.
protocol Functor {
    typealias A
    func fmap<B, C where C == Self, C.A == B>(A -> B) -> C
}
The problem is that C == Self sort of implies that C is the same as Self, and therefore has the same A. Is there any way to define a generic that is the same type but with a different type parameter?
EDIT: Maybe I should clarify that the goal here is to function like fmap in other functional languages, except self is the in parameter, instead of fmap being a global function that takes an in parameter.
No, not today. Swift lacks higher-kinded types. It doesn't even have support for first-order types. So as a first step, you'd want to be able to talk about let x: Array<_> where you didn't yet know the type of the contained object. Swift can't handle that. It can't handle generic closures, either. For example:
func firstT<T>(xs: [T]) -> T {
    return xs[0]
}
let ft = firstT // "Argument for generic parameter 'T' could not be inferred"
Those are the simple cases, and Swift's type system already starts falling apart (it can handle it as a function definition, but not as a type). Swift's type system completely falls apart when asked to represent higher-kinded types like Functor.
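What Swift can do instead is give each concrete container its own map. The signatures rhyme, but no single protocol can unify them, because its requirement would have to mention Self applied to a different type argument. A sketch (fmap here is just a hypothetical alias for the built-in map):

extension Optional {
    func fmap<B>(_ f: (Wrapped) -> B) -> B? { map(f) }
}

extension Array {
    func fmap<B>(_ f: (Element) -> B) -> [B] { map(f) }
}

print([1, 2, 3].fmap { $0 * 2 })              // [2, 4, 6]
print(Optional(21).fmap { $0 * 2 } as Any)    // Optional(42)

// A unifying protocol would need something like:
//     func fmap<B>(_ f: (A) -> B) -> Self<B>
// and "Self<B>" is exactly the higher-kinded piece Swift lacks.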
A somewhat deeper discussion of the issue (though it mostly reiterates what I've said here): https://gist.github.com/rnapier/5a84f65886e1ee40f62e
Note that this isn't specifically a feature of "other functional languages." First, Swift isn't really a functional language; it just has some functional features. Second, not all functional languages have this feature (Haskell and Scala, yes; ML and LISP, no). The feature you want is higher-kinded types, and that can exist with or without other functional features.
A nice intro to this is available in section 4.3 Higher-kinded types of Scala for generic programmers. Swift has some very interesting similarities to Scala, so the history of Scala can provide some hints about what might make sense for Swift (Higher-kinded types didn't show up until Scala 2.5).