What is a kind projector - scala

I've been digging into FP and everything that surrounds it, and I found the concept of a kind projector mentioned somewhere, without details or explanation.
The only thing I found was this GitHub project, and I'm starting to wonder whether the term refers to that particular project or to some generic concept in FP.
So, what is a kind projector? Why is it useful? (If possible, can you provide examples, resources, etc.?)

This is indeed just a slightly awkward name for the specific Scala compiler plugin you linked to. I don't think the name has any deeper significance, but it does fit the plugin's purpose.
What the plugin does is provide an alternative syntax for Scala's usual workaround for type lambdas, which relies on a language feature called type projection.
Say you wanted to implement Functor for Either. Now, Functor requires kind * -> *, whereas Either has kind * -> * -> *. So we first need to fix the first type argument, and can then provide the implementation for the partially applied type constructor. The only way to do this in "regular" Scala is this:
implicit def eitherIsFunctor[A]: Functor[({ type λ[X] = Either[A, X] })#λ] = { ... }
where {type λ[X] = Either[A, X]} is an anonymous structural type, which is only immediately used to "project out" λ, the type we actually want. In Haskell, you could just say
instance Functor (Either a) where ...
where Either is partially applied (and a is quantified over automatically).
The plugin allows one to replace the projection with something that looks more like ordinary partial application in Scala, namely Either[A, ?], instead of the hard-to-read ({ type λ[X] = Either[A, X] })#λ (it also provides general type lambdas, I think, always by desugaring them into anonymous types and projections).
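For concreteness, here is a minimal sketch of the instance written with kind-projector's syntax (assuming a bare-bones Functor trait; real code would use the one from scalaz or cats, and needs the plugin enabled):
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

object EitherInstances {
  // Either[E, ?] is kind-projector syntax for ({ type λ[X] = Either[E, X] })#λ
  implicit def eitherIsFunctor[E]: Functor[Either[E, ?]] =
    new Functor[Either[E, ?]] {
      def map[A, B](fa: Either[E, A])(f: A => B): Either[E, B] = fa match {
        case Left(e)  => Left(e)
        case Right(a) => Right(f(a))
      }
    }
}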

Scala 3 provides native type lambdas, which are no longer based on type projection:
A type lambda such as [X] =>> F[X] defines a function from types to types.
For example,
trait Functor[F[_]]
new Functor[Either[String, Int]] {} // error
new Functor[({ type λ[X] = Either[String, X] })#λ] {} // Scala 2 type lambda based on type projection
new Functor[λ[X => Either[String, X]]] {} // kind projector type lambda
new Functor[Either[String, *]] {} // kind projector type lambda
new Functor[[X] =>> Either[String, X]] {} // Scala 3 type lambda
There is also a proposal, SIP: Underscore Syntax for Type Lambdas #5379, under which
Functor[Either[String, _]] // equivalent to Functor[[X] =>> Either[String, X]]

Related

Why does Scala infer the bottom type when the type parameter is not specified?

I wonder if anyone could explain the inference rule in this particular case below and, most importantly, its rationale/implications.
case class E[A, B](a: A) // class E
E(2) // E[Int,Nothing] = E(2)
Note that I could have written E[Int](2). What matters to me is why the second type parameter is inferred to be Nothing (i.e. the bottom type) instead of, say, Any. Why is that, and what are the rationale/implications?
Just to give some context, this is related to the definition of Either and how it works for Left and Right. Both are defined according to the pattern
final case class X[+A, +B](value: A) extends Either[A, B]
where you instantiate it as, say, Right(2); the inferred type is Right[Nothing, Int] and, by extension, Either[Nothing, Int].
EDIT1
There is consistency here, but I still can't figure out the rationale. Below is the same definition with a contravariant parameter:
case class E[A, -B](a: A) // class E
E(2) // E[Int, Any] = E(2)
Hence we do have the same thing the other way around when the parameter is contravariant, which makes the whole inference behavior coherent. However, I am still not sure about the rationale ...
Why not the opposite rule, i.e. infer Any when covariant/invariant and Nothing when contravariant?
EDIT2
In light of @slouc's answer, which makes good sense, I'm still left trying to understand what the compiler is doing and why. The example below illustrates my confusion:
val myleft = Left("Error") // Left[String,Nothing] = Left(Error)
myleft map { (e:Int) => e * 4} // Either[String,Int] = Left(Error)
First, the compiler fixes the type to something that "for sure works", to reuse the conclusion of @slouc's answer (albeit that makes more sense in the context of a function): Left[String,Nothing].
Next, the compiler infers myleft to be of type Either[String,Int]:
given the definition of map, def map[B](f: A => B): Either[E, B], the function (e:Int) => e * 4 can only be supplied if myleft is actually Left[String,Int] or Either[String,Int].
So in other words, my question is: what is the point of fixing the type to Nothing if it is to be changed later?
Indeed the following does not compile
val aleft: Left[String, Nothing] = Left[String, Int]("Error")
type mismatch;
found : scala.util.Left[String,Int]
required: Left[String,Nothing]
val aleft: Left[String, Nothing] = Left[String, Int]("Error")
So why would the compiler infer a type that would normally block me from doing anything with a variable of that type (but that "for sure works" in terms of inference), only to change that type later so that I can actually do something with the variable?
EDIT3
EDIT2 was based on a misunderstanding on my part; everything is clarified in @slouc's answer and comments.
Covariance:
Given type F[+A] and relation A <: B, then the following holds: F[A] <: F[B]
Contravariance:
Given type F[-A] and relation A <: B, then the following holds: F[A] >: F[B]
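In code, the two rules look like this (a minimal sketch):
class Cov[+A]
class Contra[-A]

val c: Cov[Any]    = new Cov[Int]    // compiles: Int <: Any, so Cov[Int] <: Cov[Any]
val d: Contra[Int] = new Contra[Any] // compiles: Int <: Any, so Contra[Any] <: Contra[Int]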
If the compiler cannot infer the exact type, it will resolve to the lowest possible type in the case of covariance and the highest possible type in the case of contravariance.
Why?
This is a very important rule when it comes to variance in subtyping. It can be shown using the example of the following data type from Scala:
trait Function1[-Input, +Output]
Generally speaking, when a type is placed in the function/method parameters, it means it's in the so-called "contravariant position". If it's used in function/method return values, it's in the so-called "covariant position". If it's in both, then it's invariant.
Now, given the rules from the beginning of this post, we conclude that, given:
trait Food
trait Fruit extends Food
trait Apple extends Fruit
def foo(someFunction: Fruit => Fruit) = ???
we can supply
val f: Food => Apple = ???
foo(f)
Function f is a valid substitute for someFunction because:
Food is a supertype of Fruit (contravariance of input)
Apple is a subtype of Fruit (covariance of output)
We can explain this in natural language like this:
"Method foo needs a function that can take a Fruit and produce a
Fruit. This means foo will have some Fruit and will need a
function it can feed it to, and expect some Fruit back. If it gets a
function Food => Apple, everything is fine - it can still feed it
Fruit (because the function takes any food), and it can receive
Fruit (apples are fruit, so the contract is respected).
Coming back to your initial dilemma, hopefully this explains why, without any extra information, the compiler will resort to the lowest possible type for covariant type parameters and the highest possible type for contravariant ones. If we want to supply a function to foo, there's one that we know surely works: Any => Nothing.
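Here is a compilable sketch of both substitutions (the throwing body only serves to give g the type Any => Nothing):
trait Food
trait Fruit extends Food
trait Apple extends Fruit

def foo(someFunction: Fruit => Fruit): Unit = ()

val f: Food => Apple  = _ => new Apple {}
val g: Any => Nothing = _ => sys.error("never returns normally")

foo(f) // compiles: Food >: Fruit (input), Apple <: Fruit (output)
foo(g) // compiles: Any >: Fruit, Nothing <: Fruit - g substitutes for any function type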
Variance in general.
Variance in Scala documentation.
Article about variance in Scala (full disclosure: I wrote it).
EDIT:
I think I know what's confusing you.
When you instantiate a Left[String, Nothing], you're allowed to later map it with a function Int => Whatever, or String => Whatever, or Any => Whatever. This is precisely because of the contravariance of function input explained earlier. That's why your map works.
"what is the point of fixing the type to Nothing if it is to change it
later?"
I think it's a bit hard to wrap your head around the compiler fixing the unknown type to Nothing in the case of covariance. When it fixes the unknown type to Any in the case of contravariance, it feels more natural (it can be "anything"). Because of the duality of covariance and contravariance explained earlier, the same reasoning that makes Any feel right for contravariant positions applies to Nothing for covariant ones.
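For example (a small sketch; Scala 2.12+, where Either itself has map):
val l: Left[String, Nothing] = Left("Error")

// All of these compile, because X => Y <: Nothing => Y for any X
// (contravariance of Function1's input):
l.map((e: Int) => e * 4)       // Either[String, Int]
l.map((s: String) => s.length) // Either[String, Int]
l.map((a: Any) => a.toString)  // Either[String, String]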
This is a quote from "Unification of Compile-Time and Runtime Metaprogramming in Scala" by Eugene Burmako, https://infoscience.epfl.ch/record/226166 (pp. 95-96):
During type inference, the typechecker collects constraints on missing type arguments from bounds of type parameters, from types of term arguments, and even from results of implicit search (type inference works together with implicit search because Scala supports an analogue of functional dependencies). One can view these constraints as a system of inequalities where unknown type arguments are represented as type variables and order is imposed by the subtyping relation.
After collecting constraints, the typechecker starts a step-by-step process that, on each step, tries to apply a certain transformation to inequalities, creating an equivalent, yet supposedly simpler system of inequalities. The goal of type inference is to transform the original inequalities to equalities that represent a unique solution of the original system.
Most of the time, type inference succeeds. In that case, missing type arguments are inferred to the types represented by the solution.
However, sometimes type inference fails. For example, when a type parameter T is phantom, i.e. unused in the term parameters of the method, its only entry in the system of inequalities will be L <: T <: U, where L and U are its lower and upper bound respectively. If L != U, this inequality does not have a unique solution, and that means a failure of type inference.
When type inference fails, i.e. when it is unable to take any more transformation steps and its working state still contains some inequalities, the typechecker breaks the stalemate. It takes all yet uninferred type arguments, i.e. those whose variables are still represented by inequalities, and forcibly minimizes them, i.e. equates them to their lower bounds. This produces a result where some type arguments are inferred precisely, and some are replaced with seemingly arbitrary types. For instance, unconstrained type parameters are inferred to Nothing, which is a common source of confusion for Scala beginners.
You can learn more about type inference in Scala:
Hubert Plociniczak Decrypting Local Type Inference https://infoscience.epfl.ch/record/214757
Guillaume Martres Scala 3, Type Inference and You! https://www.youtube.com/watch?v=lMvOykNQ4zs
Guillaume Martres Dotty and types: the story so far https://www.youtube.com/watch?v=YIQjfCKDR5A
Slides http://guillaume.martres.me/talks/
Aleksander Boruch-Gruszecki GADTs in Dotty https://www.youtube.com/watch?v=VV9lPg3fNl8

Book: Scala in Depth, def foo[M[_]](f : M[Int]) = f, is _ really an existential type here?

On page 136 of the book "Scala in Depth" it is written:
But the following experiment suggests that here _ is just the same as any arbitrary type parameter T, so it is perhaps not an existential type.
scala> def foo[M[_]](f : M[Int]) = f
foo: [M[_]](f: M[Int])M[Int]
scala> def foo[M[T]](f : M[Int]) = f
foo: [M[T]](f: M[Int])M[Int]
Also note section 4.4 of the Scala Language Specification below; this also suggests that _ is the same as T here.
Can someone explain what is going on here ?
M[_] in that context (i.e. as a type parameter declaration) is a higher-kinded type (sometimes called a "type constructor"); as you say, it's the same as M[X], with the _ just meaning we aren't going to reuse the name.
In a different context (e.g. as a type) the same syntax is sometimes used to mean an existential type M[X] forSome { type X }.
It's unfortunate and confusing that the syntax looks the same, but they're two different, unrelated features. If you're confused about which one a particular use of _ is, check what the compiler or language-feature warning says. In my own code I try to always write existential types explicitly (using forSome) to avoid this confusion, but that's just something I came up with, not a rule that libraries tend to follow.
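A small sketch contrasting the two meanings (the names are mine):
object UnderscoreDemo {
  // 1. As a type parameter declaration: M is a type constructor
  //    (higher-kinded); the _ is just an unused parameter name.
  def foo[M[_]](f: M[Int]): M[Int] = f

  // 2. As a type: List[_] is an existential, shorthand for
  //    List[T] forSome { type T } - "a List of some unknown element type".
  def len(xs: List[_]): Int = xs.size

  val a = foo[List](List(1, 2, 3)) // M = List
  val b = len(List("a", "b"))      // works for a List of any element type
}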

Deciphering one of the toughest scala method prototypes (slick)

Looking at the <> method in the following scala slick class, from http://slick.typesafe.com/doc/2.1.0/api/index.html#scala.slick.lifted.ToShapedValue, it reminds me of that iconic stackoverflow thread about scala prototypes.
def <>[R, U](f: (U) ⇒ R, g: (R) ⇒ Option[U])
(implicit arg0: ClassTag[R], shape: Shape[_ <: FlatShapeLevel, T, U, _]):
MappedProjection[R, U]
Can someone bold and knowledgeable provide an articulate walkthrough of that long prototype definition, carefully clarifying all of its type covariance/invariance, double parameter lists, and other advanced scala aspects?
This exercise will also greatly help dealing with similarly convoluted prototypes!
Ok, let's take a look:
class ToShapedValue[T](val value: T) extends AnyVal {
  ...
  @inline def <>[R: ClassTag, U](f: (U) ⇒ R, g: (R) ⇒ Option[U])(implicit shape: Shape[_ <: FlatShapeLevel, T, U, _]): MappedProjection[R, U]
}
The class is an AnyVal wrapper; while I can't actually see the implicit conversion from a quick look, it smells like the "pimp my library" pattern. So I'm guessing this is meant to add <> as an "extension method" onto some (or maybe all) types.
@inline is an annotation, a way of putting metadata on, well, anything; this one is a hint to the compiler that the method should be inlined. <> is the method name - plenty of things that look like "operators" are simply ordinary methods in Scala.
The documentation you link has already expanded the R: ClassTag to ordinary R and an implicit ClassTag[R] - this is a "context bound" and it's simply syntactic sugar. ClassTag is a compiler-generated thing that exists for every (concrete) type and helps with reflection, so this is a hint that the method will probably do some reflection on an R at some point.
Now, the meat: this is a generic method, parameterized by two types: [R, U]. Its arguments are two functions, f: U => R and g: R => Option[U]. This looks a bit like the functional Prism concept - a conversion from U to R that always works, and a conversion from R to U that sometimes doesn't work.
The interesting part of the signature (sort of) is the implicit shape at the end. Shape is described as a "typeclass", so this is probably best thought of as a "constraint": it limits the possible types U and R that we can call this function with, to only those for which an appropriate Shape is available.
Looking at the documentation for Shape, we see that the four type parameters are Level, Mixed, Unpacked and Packed. So the constraint is: there must be a Shape whose level is some subtype of FlatShapeLevel, whose Mixed type is T, and whose Unpacked type is U (the Packed type can be any type).
So, this is a type-level function that expresses that U is "the unpacked version of" T. To use the example from the Shape documentation again, if T is (Column[Int], Column[(Int, String)], (Int, Option[Double])) then U will be (Int, (Int, String), (Int, Option[Double])) (and it only works for FlatShapeLevel, but I'm going to make a judgement call that that's probably not important). R is, interestingly enough, completely unconstrained apart from its ClassTag.
So this lets us create a MappedProjection[R, unpacked-version-of-T] from any T, by providing conversion functions in both directions. So in a simple version, maybe T is a Column[String] - a representation of a String column in a database - and we want to represent it as some application-specific type, e.g. EmailAddress. So U=String, R=EmailAddress, and we provide conversion functions in both directions: f: String => EmailAddress, used when reading from the database, and g: EmailAddress => Option[String], used when writing to it. Every EmailAddress can be represented as a String (at least, they'd better be, if we want to be able to store them in the database), but not every String is a valid EmailAddress; with this signature it is f, the reading direction, that has to cope with bad data such as "http://www.foo.com/" somehow sitting in the email address column (in practice by failing), while the Option in g lets the mapping signal values that cannot be converted back.
MappedProjection itself is, sadly, undocumented. But I'm guessing it's some kind of lazy representation of a thing we can query; where we had a Column[String], now we have a pseudo-column-thing whose (underlying) type is EmailAddress. So this might allow us to write pseudo-queries like 'select from users where emailAddress.domain = "gmail.com"', which would be impossible to do directly in the database (which doesn't know which part of an email address is the domain), but is easy to do with the help of code. At least, that's my best guess at what it might do.
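For what it's worth, here is a sketch of how <> is typically called in Slick 2.x, matching the EmailAddress example (the table and column names are invented; EmailAddress.apply and EmailAddress.unapply supply exactly the f and g above):
import scala.slick.driver.H2Driver.simple._

case class EmailAddress(value: String)

class Users(tag: Tag) extends Table[EmailAddress](tag, "USERS") {
  def email = column[String]("EMAIL")
  // f: String => EmailAddress (reading rows), g: EmailAddress => Option[String] (writing)
  def * = email <> (EmailAddress.apply, EmailAddress.unapply)
}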
Arguably the function could be made clearer by using a standard Prism type (e.g. the one from Monocle) rather than passing a pair of functions explicitly. Using the implicit to provide a type-level function is awkward but necessary; in a fully dependently typed language (e.g. Idris), we could write our type-level function as a function (something like def unpackedType(t: Type): Type = ...). So conceptually, this function looks something like:
def <>[R](p: Prism[R, unpackedType(T)]): MappedProjection[R, unpackedType(T)]
Hopefully this explains some of the thought process of reading a new, unfamiliar function. I don't know Slick at all, so I have no idea how accurate I am as to what this <> is used for - did I get it right?

What is and when to use Scala's forSome keyword?

What is the difference between List[T] forSome {type T} and List[T forSome {type T}]? How do I read them in "English"? How should I grok the forSome keyword? What are some practical uses of forSome? What are some useful practical and more complex than simple T forSome {type T} usages?
Attention (update 2016-12-08): the forSome keyword is very likely going away with Scala 2.13 or 2.14, according to Martin Odersky's talk at ScalaX 2016. Replace it with path-dependent types or with wildcard types (A[_]). This is possible in most cases. If you have an edge case where it is not possible, refactor your code or loosen your type restrictions.
How to read "forSome" (in an informal way)
Usually when you use a generic API, the API guarantees that it will work with any type you provide (up to some given constraints). So when you use List[T], the List API guarantees that it will work with any type T you provide.
With forSome (so-called existentially quantified type parameters) it is the other way round. The API will provide a type (not you), and it guarantees that it will work with the type it provided. The semantics is that a concrete object will give you something of type T, and the same object will also accept back the things it provided you. But no other object may work with these Ts, and no other object can provide you with something of type T.
The idea of "existentially quantified" is: there exists (at least) one type T (in the implementation) that fulfills the contract of the API - but I won't tell you which type it is.
forSome can be read similarly: for some types T, the API contract holds true. It is not necessarily true for all types T. So when you provide some type T yourself (instead of the one hidden in the implementation of the API), the compiler cannot guarantee that you got the right T, so it will report a type error.
Applied to your example
So when you see List[T] forSome {type T} in an API, you can read it like this: the API will provide you with a List of some unknown type T. It will gladly accept this list back, and it will work with it. But it won't tell you what T is. At least you know that all elements of the list are of the same type T.
The second one is a little bit trickier. Again the API will provide you with a List, and again it will use some type T without telling you what T is. But this time it is free to choose a different type for each element. A real-world API would establish some constraints on T so that it can actually work with the elements of the list.
Conclusion
forSome is useful when you write an API where each object represents an implementation of the API. Each implementation will provide you with some objects and will accept those objects back. But you can neither mix objects from different implementations nor create the objects yourself. Instead, you must always use the corresponding API functions to get objects that will work with that API. forSome enables a very strict kind of encapsulation. You can read forSome in the following way:
The API contract holds true for some types, but you don't know for which types it holds. Hence you cannot provide your own type, and you cannot create your own objects. You have to use the ones provided through the API that uses forSome.
This is quite informal and might even be wrong in some corner cases, but it should help you grok the concept.
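As a tiny sketch of that kind of encapsulation (the Holder class and its members are invented for illustration):
object ExistentialDemo {
  class Holder[T](private var value: T) {
    def get: T = value
    def put(t: T): Unit = { value = t }
  }

  // Clients only see "a Holder of some type T":
  val h: Holder[T] forSome { type T } = new Holder(42)

  // h.put(h.get) does not compile: each use of h opens the existential with
  // a fresh unknown type. Naming the type in a pattern makes them line up:
  def cycle(): Unit = h match {
    case hh: Holder[t] => hh.put(hh.get) // ok: both uses share the same t
  }
}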
There are a lot of questions here, and most of them have been addressed pretty thoroughly in the answers linked in the comments above, so I'll respond to your more concrete first question.
There's no real meaningful difference between List[T] forSome { type T } and List[T forSome { type T }], but we can see a difference between the following two types:
class Foo[A]
type Outer = List[Foo[T]] forSome { type T }
type Inner = List[Foo[T] forSome { type T }]
We can read the first as "a list of foos of T, for some type T". There's a single T for the entire list. The second, on the other hand, can be read as "a list of foos, where each foo is of T for some T".
To put it another way, if we've got a list outer: Outer, we can say that "there exists some type T such that outer is a list of foos of T", where for a list of type Inner, we can only say that "for each element of the list, there exists some T such that that element is a foo of T". The latter is weaker—it tells us less about the list.
So, for example, if we have the following two lists:
val inner: Inner = List(new Foo[Char], new Foo[Int])
val outer: Outer = List(new Foo[Char], new Foo[Int])
The first will compile just fine—each element of the list is a Foo[T] for some T. The second won't compile, since there's not some T such that each element of the list is a Foo[T].

Scala type programming resources

According to this question, Scala's type system is Turing complete. What resources are available that enable a newcomer to take advantage of the power of type-level programming?
Here are the resources I've found so far:
Daniel Spiewak's High Wizardry in the Land of Scala
Apocalisp's Type-Level Programming in Scala
Jesper's HList
These resources are great, but I feel like I'm missing the basics, and so do not have a solid foundation on which to build. For instance, where is there an introduction to type definitions? What operations can I perform on types?
Are there any good introductory resources?
Overview
Type-level programming has many similarities with traditional, value-level programming. However, unlike value-level programming, where the computation occurs at runtime, in type-level programming, the computation occurs at compile time. I will try to draw parallels between programming at the value-level and programming at the type-level.
Paradigms
There are two main paradigms in type-level programming: "object-oriented" and "functional". Most examples linked to from here follow the object-oriented paradigm.
A good, fairly simple example of type-level programming in the object-oriented paradigm can be found in apocalisp's implementation of the lambda calculus, replicated here:
// Abstract trait
trait Lambda {
  type subst[U <: Lambda] <: Lambda
  type apply[U <: Lambda] <: Lambda
  type eval <: Lambda
}

// Implementations
trait App[S <: Lambda, T <: Lambda] extends Lambda {
  type subst[U <: Lambda] = App[S#subst[U], T#subst[U]]
  type apply[U <: Lambda] = Nothing
  type eval = S#eval#apply[T]
}

trait Lam[T <: Lambda] extends Lambda {
  type subst[U <: Lambda] = Lam[T]
  type apply[U <: Lambda] = T#subst[U]#eval
  type eval = Lam[T]
}

trait X extends Lambda {
  type subst[U <: Lambda] = U
  type apply[U <: Lambda] = Lambda
  type eval = X
}
As can be seen in the example, the object-oriented paradigm for type-level programming proceeds as follows:
First: define an abstract trait with various abstract type fields (see below for what an abstract field is). This is a template for guaranteeing that certain type fields exist in all implementations without forcing an implementation. In the lambda calculus example, this corresponds to trait Lambda, which guarantees that the following types exist: subst, apply, and eval.
Next: define subtraits that extend the abstract trait and implement the various abstract type fields
Often, these subtraits will be parameterized with arguments. In the lambda calculus example, the subtypes are trait App extends Lambda, which is parameterized with two types (S and T, both of which must be subtypes of Lambda); trait Lam extends Lambda, parameterized with one type (T); and trait X extends Lambda (which is not parameterized).
The type fields are often implemented by referring to the type parameters of the subtrait, sometimes referencing their type fields via the hash operator # (which is very similar to the dot operator . for values). In trait App of the lambda calculus example, the type eval is implemented as follows: type eval = S#eval#apply[T]. This essentially invokes the eval type of the trait's parameter S, and calls apply with parameter T on the result. Note that S is guaranteed to have an eval type, because the parameter specifies it to be a subtype of Lambda. Similarly, the result of eval must have an apply type, since it too is specified to be a subtype of Lambda, as declared in the abstract trait Lambda. A compile-time check of this machinery is sketched just below.
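For example, the compiler itself can verify that applying the identity function λx.x to itself evaluates to λx.x (a sketch; =:= is the equality evidence discussed further below):
object LambdaCheck {
  type Id = Lam[X]                // λx.x
  type Applied = App[Id, Id]#eval // (λx.x)(λx.x), reduced at compile time
  implicitly[Applied =:= Id]      // compiles only if the reduction yields λx.x
}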
The Functional paradigm consists of defining lots of parameterized type constructors that are not grouped together in traits.
Comparison between value-level programming and type-level programming
abstract class
value-level: abstract class C { val x: X }
type-level: trait C { type X }
path dependent types
C.x (referencing field value/function x in object C)
C#x (referencing field type x in trait C)
function signature (no implementation)
value-level: def f(x:X) : Y
type-level: type f[x <: X] <: Y (this is called a "type constructor" and usually occurs in the abstract trait)
function implementation
value-level: def f(x:X) : Y = x
type-level: type f[x <: X] = x
conditionals
see here
checking equality
value-level: a:A == b:B
type-level: implicitly[A =:= B]
value-level: happens in the JVM via a unit test at runtime (i.e. no runtime errors):
in essence an assertion: assert(a == b)
type-level: happens in the compiler via a typecheck (i.e. no compiler errors):
in essence a type comparison: e.g. implicitly[A =:= B]
A <:< B compiles only if A is a subtype of B
A =:= B compiles only if A is a subtype of B and B is a subtype of A
A <%< B ("viewable as") compiles only if A is viewable as B (i.e. there is an implicit conversion from A to a subtype of B)
an example
more comparison operators
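For instance, these checks pass at compile time (a small sketch):
object TypeChecks {
  implicitly[Int =:= Int]            // compiles: the types are equal
  implicitly[List[Int] <:< Seq[Int]] // compiles: List[Int] is a subtype of Seq[Int]
  // implicitly[Int =:= String]      // would not compile
}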
Converting between types and values
In many of the examples, types defined via traits are often both abstract and sealed, and therefore can neither be instantiated directly nor via anonymous subclass. So it is common to use null as a placeholder value when doing a value-level computation using some type of interest:
e.g. val x:A = null, where A is the type you care about
Due to type erasure, parameterized types all look the same at runtime. Furthermore (as mentioned above), the values you're working with tend to all be null, so conditioning on the object's type (e.g. via a match statement) is ineffective.
The trick is to use implicit functions and values. The base case is usually an implicit value and the recursive case is usually an implicit function. Indeed, type-level programming makes heavy use of implicits.
Consider this example (taken from metascala and apocalisp):
sealed trait Nat
sealed trait _0 extends Nat
sealed trait Succ[N <: Nat] extends Nat
Here you have a Peano encoding of the natural numbers. That is, you have a type for each non-negative integer: a special type for 0, namely _0; and each integer greater than zero has a type of the form Succ[A], where A is the type representing the next smaller integer. For instance, the type representing 2 would be Succ[Succ[_0]] (successor applied twice to the type representing zero).
We can alias various natural numbers for more convenient reference. Example:
type _3 = Succ[Succ[Succ[_0]]]
(This is a lot like defining a val to be the result of a function.)
Now, suppose we want to define a value-level function def toInt[T <: Nat](v : T) which takes in an argument value, v, that conforms to Nat and returns an integer representing the natural number encoded in v's type. For example, if we have the value val x:_3 = null (null of type Succ[Succ[Succ[_0]]]), we would want toInt(x) to return 3.
To implement toInt, we're going to make use of the following class:
class TypeToValue[T, VT](value : VT) { def getValue() = value }
As we will see below, there will be an object constructed from class TypeToValue for each Nat from _0 up to (e.g.) _3, and each will store the value representation of the corresponding type (i.e. TypeToValue[_0, Int] will store the value 0, TypeToValue[Succ[_0], Int] will store the value 1, etc.). Note, TypeToValue is parameterized by two types: T and VT. T corresponds to the type we're trying to assign values to (in our example, Nat) and VT corresponds to the type of value we're assigning to it (in our example, Int).
Now we make the following two implicit definitions:
implicit val _0ToInt = new TypeToValue[_0, Int](0)
implicit def succToInt[P <: Nat](implicit v: TypeToValue[P, Int]): TypeToValue[Succ[P], Int] =
  new TypeToValue[Succ[P], Int](1 + v.getValue())
And we implement toInt as follows:
def toInt[T <: Nat](v : T)(implicit ttv : TypeToValue[T, Int]) : Int = ttv.getValue()
To understand how toInt works, let's consider what it does on a couple of inputs:
val z:_0 = null
val y:Succ[_0] = null
When we call toInt(z), the compiler looks for an implicit argument ttv of type TypeToValue[_0, Int] (since z is of type _0). It finds the object _0ToInt and calls the getValue method of that object, getting back 0. The important point to note is that we did not specify which object to use; the compiler found it implicitly.
Now let's consider toInt(y). This time, the compiler looks for an implicit argument ttv of type TypeToValue[Succ[_0], Int] (since y is of type Succ[_0]). It finds the function succToInt, which can return an object of the appropriate type (TypeToValue[Succ[_0], Int]), and evaluates it. This function itself takes an implicit argument v of type TypeToValue[_0, Int] (that is, a TypeToValue whose first type parameter has one fewer Succ[_]). The compiler supplies _0ToInt (as was done in the evaluation of toInt(z) above), and succToInt constructs a new TypeToValue object with value 1. Again, it is important to note that the compiler provides all of these values implicitly, since we do not have access to them explicitly.
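Putting the pieces together (a sketch assuming all of the definitions above share one scope):
object NatDemo extends App {
  val zero: _0 = null
  val two: Succ[Succ[_0]] = null

  println(toInt(zero)) // 0: resolved directly via _0ToInt
  println(toInt(two))  // 2: succToInt applied twice, then _0ToInt
}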
Checking your work
There are several ways to verify that your type-level computations are doing what you expect. Here are a few approaches. Make two types A and B that you want to verify are equal, then check that the following compile:
Equal[A, B]
with: trait Equal[T1 >: T2 <: T2, T2] (taken from apocalisp)
implicitly[A =:= B]
Alternatively, you can convert the type to a value (as shown above) and do a runtime check of the values. E.g. assert(toInt(a) == toInt(b)), where a is of type A and b is of type B.
Additional Resources
The complete set of available constructs can be found in the types section of the scala reference manual (pdf).
Adriaan Moors has several academic papers about type constructors and related topics with examples from scala:
Generics of a higher kind (pdf)
Type Constructor Polymorphism for Scala: Theory and Practice (pdf) (PhD thesis, which includes the previous paper by Moors)
Type Constructor Polymorphism Inference
Apocalisp is a blog with many examples of type-level programming in scala.
Type-Level Programming in Scala is a fantastic guided tour of some type-level programming which includes booleans, natural numbers (as above), binary numbers, heterogeneous lists, and more.
More Scala Typehackery is the lambda calculus implementation above.
Scalaz is a very active project that is providing functionality that extends the Scala API using various type-level programming features. It is a very interesting project that has a big following.
MetaScala is a type-level library for Scala, including meta types for natural numbers, booleans, units, HList, etc. It is a project by Jesper Nordenberg (his blog).
The Michid (blog) has some awesome examples of type-level programming in Scala (from other answer):
Meta-Programming with Scala Part I: Addition
Meta-Programming with Scala Part II: Multiplication
Meta-Programming with Scala Part III: Partial function application
Meta-Programming with Scala: Conditional Compilation and Loop Unrolling
Scala type level encoding of the SKI calculus
Debasish Ghosh (blog) has some relevant posts as well:
Higher order abstractions in scala
Static typing gives you a head start
Scala implicits type classes, here I come
Refactoring into scala type-classes
Using generalized type constraints
How Scala's type system works for you
Choosing between abstract type members
(I've been doing some research on this subject and here's what I've learned. I'm still new to it, so please point out any inaccuracies in this answer.)
In addition to the other links here, there are also my blog posts on type level meta programming in Scala:
Meta-Programming with Scala Part I: Addition
Meta-Programming with Scala Part II: Multiplication
Meta-Programming with Scala Part III: Partial function application
Meta-Programming with Scala: Conditional Compilation and Loop Unrolling
Scala type level encoding of the SKI calculus
As suggested on Twitter: Shapeless: An exploration of generic/polytypic programming in Scala by Miles Sabin.
Sing, a type-level metaprogramming library in Scala.
The Beginning of Type-level Programming in Scala
Scalaz has source code, a wiki and examples.