Covariance/contravariance and the relationship with consumers/producers

I have read this article on covariance/contravariance: http://julien.richard-foy.fr/blog/2013/02/21/be-friend-with-covariance-and-contravariance/
The examples are very clear. However, I am struggling to understand the conclusions drawn at the end:
If you look at the definitions of Run[+A] and Vet[-A] you may notice
that the type A appears only in the return type of methods of Run[+A]
and only in the parameters of methods of Vet[-A]. More generally a
type that produces values of type A can be made covariant on A (as you
did with Run[+A]), and a type that consumes values of type A can be
made contravariant on A (as you did with Vet[-A]).
From the above paragraph you can deduce that types that only have
getters can be covariant (in other words, immutable data types can be
covariant, and that’s the case for most of the data types of Scala’s
standard library), but mutable data types are necessarily invariant
(they have getters and setters, so they both produce and consume
values).
Producers: If something produces type A, I can imagine some reference variable of type A being set to an object of type A or any subtypes of A, but not supertypes, so it's appropriate that it can be covariant.
Consumers: If something consumes type A, I guess that means type A may be used as parameters in methods. I'm not clear what relationship this has to covariance or contravariance.
It seems from the examples that specifying a type as covariant/contravariant affects how it can be consumed by other functions but not sure how it affects the classes themselves.

It seems from the examples that specifying a type as covariant/contravariant affects how it can be consumed by other functions but not sure how it affects the classes themselves.
That's right: the article focused on the consequences of variance for users of a class, not for its implementers.
The article shows that covariant and contravariant types give more freedom to users (because a function that accepts a Run[Mammal] effectively accepts a Run[Giraffe] or a Run[Zebra]). For implementers, the perspective is dual: covariant and contravariant types give them more constraints.
These constraints are that covariant types cannot occur in contravariant positions and vice versa.
Consider for instance this Producer type definition:
trait Producer[+A] {
  def produce(): A
}
The type parameter A is covariant. Therefore we can only use it in covariant positions (such as a method return type); we cannot use it in contravariant positions (such as a method parameter):
trait Producer[+A] {
  def produce(): A
  def consume(a: A): Unit // does not compile because A is in contravariant position
}
Why is it illegal to do so? What could go wrong if this code compiled? Well, consider the following scenario. First, get some Producer[Zebra]:
val zebraProducer: Producer[Zebra] = …
Then upcast it to a Producer[Mammal] (which is legal, because we claimed the type parameter to be covariant):
val mammalProducer: Producer[Mammal] = zebraProducer
Finally, feed it a Giraffe (which is legal too, because the consume method of a Producer[Mammal] accepts a Mammal, and a Giraffe is a Mammal):
mammalProducer.consume(new Giraffe)
However, if you remember well, the mammalProducer was actually a zebraProducer, so its consume implementation actually accepts only a Zebra, not a Giraffe! So, in practice, if it were allowed to use covariant types in contravariant positions (as I did with the consume method), the type system would be unsound. We can construct a similar scenario (leading to an absurdity) if we pretend that a class with a contravariant type parameter can also have a method where that parameter is in covariant position (see the appendix at the end for the code).
(Note that several programming languages, e.g. Java or TypeScript, have such unsound type systems.)
In practice, in Scala if we want to use a covariant type parameter in contravariant position, we have to use the following trick:
trait Producer[+A] {
  def produce(): A
  def consume[B >: A](b: B): Unit
}
In that case, a Producer[Zebra] would not expect to get an actual Zebra passed to its consume method, but any value of a type B lower-bounded by Zebra. So it becomes legal to pass a Giraffe: the compiler infers B = Mammal, which is a super-type of Zebra.
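Here is a minimal sketch of the trick in action (assuming the simple Mammal/Zebra/Giraffe hierarchy from the examples):

class Mammal
class Zebra extends Mammal
class Giraffe extends Mammal

trait Producer[+A] {
  def produce(): A
  // A may appear as the lower bound of another type parameter:
  def consume[B >: A](b: B): Unit
}

val zebraProducer: Producer[Zebra] = new Producer[Zebra] {
  def produce(): Zebra = new Zebra
  def consume[B >: Zebra](b: B): Unit = println(s"consumed: $b")
}

val mammalProducer: Producer[Mammal] = zebraProducer // legal: covariance
mammalProducer.consume(new Giraffe)                  // legal: B is inferred as Mammal

This is the same technique the standard library uses: List[+A] declares its prepend method as def ::[B >: A](elem: B): List[B].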
Appendix: similar scenario for contravariance
Consider the following class Consumer[-A], which has a contravariant type parameter A:
trait Consumer[-A] {
  def consume(a: A): Unit
}
Suppose that the type system allowed us to define a method where A is in covariant position:
trait Consumer[-A] {
  def consume(a: A): Unit
  def produce(): A // does not actually compile because A is in covariant position
}
Now we can get an instance of Consumer[Mammal], upcast it to Consumer[Zebra] (because of contravariance) and call the produce method to get a Zebra:
val mammalConsumer: Consumer[Mammal] = …
val zebraConsumer: Consumer[Zebra] = mammalConsumer // legal, because we claimed `A` to be contravariant
val zebra: Zebra = zebraConsumer.produce()
However, our zebraConsumer is actually mammalConsumer, whose produce method can return any Mammal, not just Zebras. So, in the end, zebra might be initialized to some Mammal that is not a Zebra! To avoid such absurdities, the type system forbids us from defining the produce method in the Consumer class.


How are primitive types objects in Scala?

How are primitive types in Scala objects if we do not use the word "new" to instantiate instances of those primitives? Programming in Scala by Martin Odersky describes the reasoning as a "trick" that defines these value classes as both abstract and final, which did not quite make sense to me: how are we able to make an instance of these classes if they are abstract? And if a primitive literal is stored somewhere, say in a variable, does that make the variable an object?
I assume that you use Scala 2.13 with its implementation of literal types. For this explanation you can think of type and class as synonyms, although in reality they are different concepts.
To put it all together, it is worth treating each primitive type as a set of subtypes, each of which represents the type of one single literal value.
So the literal 1 is a value and a type at the same time (the instance 1 of type 1), and it is a subtype of the value class Int.
Let's prove that 1 is a subtype of Int by using implicitly:
implicitly[1 <:< Int] // compiles
The same but using val:
val one: 1 = 1
implicitly[one.type <:< Int] // compiles
So one is kind of an instance (object) of type 1 (and an instance of type Int at the same time, because Int is a supertype of 1). You can use this value the same way as any other object (pass it to a function, assign it to other vals, etc.).
val one: 1 = 1
val oneMore: 1 = one
val oneMoreGeneric: Int = one
val oneNew: 1 = 1
We can assume that all these vals contain the same instance of one single object, because from a practical perspective it doesn't actually matter whether it is the same object or not.
Technically it's not an object at all, because primitives come from the Java (JVM) world, where primitives are not objects; they are a different kind of entity.
The Scala language tries to unify these two concepts into one (everything is a class), so developers don't have to think too much about the differences.
But there are still some differences backstage: each value class is a subtype of AnyVal, while the rest of the classes are subtypes of AnyRef (regular classes).
implicitly[1 <:< AnyVal]        // compiles
implicitly[Int <:< AnyVal]      // compiles
trait AnyTrait
implicitly[AnyTrait <:< AnyVal] // fails to compile
implicitly[AnyTrait <:< AnyRef] // compiles
And in addition, because of their non-class nature in the JVM, you can't extend value classes like regular classes or use new to create an instance (the Scala compiler emulates new by itself). That's why, from the perspective of extending value classes, you should think of them as final, and from the perspective of creating instances manually, you should think of them as abstract. But from most other perspectives they are like any other regular class.
So the Scala compiler can, in a sense, extend Int with the types 1, 2, 3, ... and create instances of them for vals, but developers can't do it manually.
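A small sketch (Scala 2.13) pulling these pieces together; onlyOne is a hypothetical method for illustration:

val x: 1 = 1           // the literal 1 typed with its own singleton type
val y: Int = x         // widens to the supertype Int
// val z: 1 = 2        // does not compile: 2 is not of type 1

def onlyOne(v: 1): Int = v // a method that accepts only the literal 1
onlyOne(1)                 // compiles
// onlyOne(2)              // does not compile

// Consistent with "final":    class MyInt extends Int -- does not compile
// Consistent with "abstract": new Int                 -- does not compile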

"Nothing is a subclass of every other class", how to understand it?

These days, I am reading the book Programming in Scala. There is one sentence in the book on page 246, Chapter 11, first paragraph:
For example, just as Any is a superclass of every other class, Nothing
is a subclass of every other class.
I understand the first part because every class inherits Any in a direct or indirect way. But I can't understand the latter part of the sentence.
This is class Nothing definition:
abstract final class Nothing extends Any
Conceptually, Nothing is something that is harder to grasp than Any, which we're familiar with from Java and most other object-oriented languages. Nothing is Scala's bottom type. The definition from Wikipedia states:
In subtyping systems, the bottom type is the subtype of all types.
(However, the converse is not true—a subtype of all types is not
necessarily the bottom type.) It is used to represent the return type
of a function that does not return a value: for instance, one which
loops forever, signals an exception, or exits.
I think the easiest way to reason about it is to think of it as a way of describing something that never returns. You can't construct an instance of a Nothing type yourself. One particular use of Nothing in Scala is to be able to reason about covariant parametric types:
In Scala, the bottom type is denoted as Nothing. Besides its use for
functions that just throw exceptions or otherwise don't return
normally, it's also used for covariant parameterized types. For
example, Scala's List is a covariant type constructor, so
List[Nothing] is a subtype of List[A] for all types A. So Scala's Nil,
the object for marking the end of a list of any type, belongs to the
type List[Nothing].
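A short sketch of both uses (fail is a hypothetical helper):

// A method that never returns normally can be given the type Nothing:
def fail(msg: String): Nothing = throw new IllegalStateException(msg)

// Because List is covariant and Nothing <: A for every A,
// List[Nothing] <: List[A], so the single Nil value fits everywhere:
val ints: List[Int] = Nil // Nil is a List[Nothing]
val strings: List[String] = Nil

// Nothing also conforms to whatever type an expression is expected to have:
def headOr(xs: List[Int]): Int =
  if (xs.nonEmpty) xs.head else fail("empty list")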
Being a subtype of is not the same as being a subclass of. From the Wikipedia article on Subtyping:
Subtyping should not be confused with the notion of (class or object)
inheritance from object-oriented languages; subtyping is a relation
between types (interfaces in object-oriented parlance) whereas
inheritance is a relation between implementations stemming from a
language feature that allows new objects to be created from existing
ones. In a number of object-oriented languages, subtyping is called
interface inheritance.
This is also denoted in the Scala Specification, Section 3.5.2 (Conformance), as part of the <: ("is a subtype of") relation, both for value types and type constructors:
For every value type T, scala.Nothing <: T <: scala.Any.
For every type constructor T (with any number of type parameters), scala.Nothing <: T <: scala.Any.
One familiar aspect of subtyping is variance of generic types, where - denotes contravariance and + denotes covariance.
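For example (a minimal sketch with empty classes, just to show the direction of the subtyping relation):

class Covariant[+A]
class Contravariant[-A]

// String <: Any, so Covariant[String] <: Covariant[Any]:
val co: Covariant[Any] = new Covariant[String]
// ...while the relation flips for contravariance:
val contra: Contravariant[String] = new Contravariant[Any]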

Casting "hollow" higher kinded type values to avoid instantiations

I caught myself watching a bit of the Scalawags#2 recording, and then there came this part about type erasure and Dick Wall pointing out that reflection will eventually bite you in the feet.
So I was thinking about something that I'm doing quite frequently (and I saw it in the implementation of Scala Collections as well). Let's say I have a system with a serializer taking the system as a type parameter:
trait Sys[S <: Sys[S]] { type Tx }
trait FooSys extends Sys[FooSys]
trait Serializer[S <: Sys[S], A] {
  def read(implicit tx: S#Tx): A
}
Now there are many types A for which serializers can be constructed without value parameters, so essentially the system type parameter is "hollow". And since serializers are heavily invoked in my example, I'm saving instantiation:
object Test {
  def serializer[S <: Sys[S]]: Serializer[S, Test[S]] =
    anySer.asInstanceOf[Ser[S]]

  private val anySer = new Ser[FooSys]

  private final class Ser[S <: Sys[S]] extends Serializer[S, Test[S]] {
    def read(implicit tx: S#Tx) = new Test[S] {} // (shortened for the example)
  }
}
trait Test[S <: Sys[S]]
I know this is correct, but of course, asInstanceOf has a bad smell. Are there any suggestions regarding this approach? Let me add two things:
moving the type parameter from the constructor of trait Serializer to the read method is not an option (there are specific serializers which require value arguments parametrised in S)
adding variance to Serializer's type constructor parameter is not an option
Introduction:
I am a little confused by your example and I might have misunderstood your question. I have a feeling there is a certain type recursion between S and Tx that I am not getting from your question (because if not, S#Tx could be anything, and I don't understand the problem with the anySer).
Tentative Answer:
At compile time, for any instance of Ser[T] there will be a well-defined type parameter T. Since you want to save on instantiation, you will have a single anySer for one given specific type (FooSys in your example).
What you are saying, in some way, is that a Ser[A] will work as a Ser[S] for any S. This can be explained in two ways, according to the relationship between the types A and S.
1. If this conversion is possible for every A <:< S, then your serializer is COVARIANT and you can initialize your anySer as a Ser[Nothing]. Since Nothing is a subclass of every class in Scala, your anySer will always work as a Ser[Whatever].
2. If this conversion is possible for every S <:< A, then your serializer is CONTRAVARIANT and you can initialize your anySer as a Ser[Any]. Since Any is a superclass of every class in Scala, your anySer will always work as a Ser[Whatever].
3. If it's neither of the previous cases, then it means that:
def serializer[S <: Sys[S]]: Serializer[S, Test[S]] =
  anySer.asInstanceOf[Ser[S]]
could produce a horrible failure at runtime, because there will be some S for which the Serializer won't work. If there is no such S for which this could happen, then your case falls into either 1 or 2.
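To illustrate cases 1 and 2, here is a minimal sketch with simplified stand-in types (CoSer and ContraSer are hypothetical, not the asker's Serializer):

// Case 1: covariant; a single CoSer[Nothing] serves every S.
trait CoSer[+S] { def id: String }
object CoSer {
  private val anySer = new CoSer[Nothing] { def id = "shared" }
  def serializer[S]: CoSer[S] = anySer // no cast: CoSer[Nothing] <: CoSer[S]
}

// Case 2: contravariant; a single ContraSer[Any] serves every S.
trait ContraSer[-S] { def describe(s: S): String }
object ContraSer {
  private val anySer = new ContraSer[Any] { def describe(s: Any) = s.toString }
  def serializer[S]: ContraSer[S] = anySer // no cast: ContraSer[Any] <: ContraSer[S]
}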
Comment post-edit
If your types are really invariant, the conversion through a cast breaks the invariance relation. You are basically forcing the type system to perform an unnatural conversion because you know that nothing wrong will happen, on the basis of your own knowledge of the code you have written. If this is the case, then casting is the right way to go: you are forcing a different type from the one the compiler can check formally, and you are making this explicit. I would even put a big comment saying why you know the operation is legal while the compiler can't tell, and eventually attach a beautiful unit test to verify that the informal relation always holds.
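For instance, the cast site in the question might be documented like this (a sketch of the comment only, no new functionality):

def serializer[S <: Sys[S]]: Serializer[S, Test[S]] =
  // SAFE: Ser never stores or inspects values tied to a concrete S, and
  // type parameters are erased at runtime, so the one cached instance
  // works for every S. This invariant is informal: the compiler cannot
  // check it, so it is asserted here and verified by a unit test.
  anySer.asInstanceOf[Ser[S]]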
In general, I believe this practice should be used with extreme care. One of the benefits of strongly typed languages is that the compiler performs formal type checking that helps you catch early errors. If you intentionally break it, you give away this big benefit.

Scala: Using representation types

Because traits with representation types are self-referential, declaring that a variable holds an instance of such a trait is a little difficult. In this example I simply declare that a variable holds an instance of the trait, declare that a function takes and returns an instance of that trait, and call that function with the variable:
trait Foo[+A <: Foo[A]]
case class Bar() extends Foo[Bar]
case class Grill() extends Foo[Grill]
// Store a generic instance of Foo
val b: Foo[_] = if (true) {
  Bar()
} else {
  Grill()
}
// Declare a function that takes any Foo and returns a Foo of the same type
// that "in" has in the calling context
def echoFoo[A <: Foo[A]](in: A): A = in
// Call said function
val echo = echoFoo(b)
It fails with the error:
inferred type arguments [this.Foo[_$1]] do not conform to method
echoFoo's type parameter bounds [A <: this.Foo[A]]
val echo = echoFoo(b)
^
Now, this makes sense because [_] is like Any (in ways I don't fully understand). What it probably wants is something like Foo[Foo[_]], so that the type parameter conforms to the bounds of A <: Foo[A]. But now there's an inner Foo that has a non-conforming type parameter, suggesting that the solution is something like Foo[Foo[Foo[Foo[..., which is clearly not correct.
So my question can probably be distilled down to: What is the Scala syntax for "This variable holds any legal Foo"?
Self-referential type parameters like this are a bit problematic, because they're not sound. For example, it's possible to define a type like the following:
case class BeerGarden() extends Foo[Grill]
As you can see, the A <: Foo[A] bound isn't sufficiently tight. What I prefer in situations like this is to use the cake pattern, and abstract type members:
trait FooModule {
  type Foo <: FooLike
  def apply(): Foo

  trait FooLike {
    def echo: Foo
  }
}
Now you can use the Foo type recursively and safely:
object Foos {
  def echo(foo: FooModule#Foo) = foo.echo
}
Obviously, this isn't an ideal solution to all the problems you might want to solve with such types, but the important observation is that FooLike is an extensible trait, so you can always continue to refine FooLike to add the members that you need, without violating the bound that the type member is intended to enforce. I've found that in every real-world case where the set of types I want to represent is not closed, this is about the best that one can do. The important thing to see is that FooModule abstracts over both the type and the instance constructor, while enforcing the "self-type." You can't abstract over one without abstracting over the other.
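For instance, a concrete module might look like this (a sketch; BarModule and Bar are hypothetical names):

object BarModule extends FooModule {
  final class Bar extends FooLike {
    def echo: Bar = this
  }
  type Foo = Bar
  def apply(): Bar = new Bar
}

val echoed = Foos.echo(BarModule()) // works: BarModule.Foo conforms to FooModule#Foo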
Some additional information on this sort of thing (and a bit of a record of my own early struggles with recursive types) is available here:
https://issues.scala-lang.org/browse/SI-2385
While I agree that the problem of propagating generics exists, when you hit this problem you should see a big WARNING on your screen, because it's typically a sign of bad design. Here are some general suggestions on the topic.
If you use generics, the type parameter is there for a reason. It lets you interact with a Foo[A] in a type-safe manner by passing in or receiving parameters of type A, and it allows you to put constraints on A. If you lose the type information, you lose the type-safety, and in that case there is no point in writing a generic class if you do not need the generics anymore: you can change all your signatures to Any and do pattern matching.
In most cases, recursive types can be avoided by implementing something like the CanBuildFrom approach for collections, using a "typeclass" (see the sketch after these suggestions).
Finally, type projection (FooModule#Foo) has little application, and you might want to look at path-dependent types instead. However, these have little application as well.
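To make the typeclass suggestion concrete, here is a minimal sketch (Echo is a hypothetical typeclass standing in for the recursive Foo[A] bound):

trait Echo[A] {
  def echo(a: A): A
}

case class Bar()
case class Grill()

object Echo {
  implicit val barEcho: Echo[Bar] = new Echo[Bar] { def echo(a: Bar) = a }
  implicit val grillEcho: Echo[Grill] = new Echo[Grill] { def echo(a: Grill) = a }
}

// The capability travels as an implicit parameter instead of a recursive bound:
def echoFoo[A](in: A)(implicit ev: Echo[A]): A = ev.echo(in)

echoFoo(Bar())   // ok: returns a Bar
echoFoo(Grill()) // ok: returns a Grill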

"Parameter type in structural refinement may not refer to an abstract type defined outside that refinement"

When I compile:
object Test extends App {
  implicit def pimp[V](xs: Seq[V]) = new {
    def dummy(x: V) = x
  }
}
I get:
$ fsc -d aoeu go.scala
go.scala:3: error: Parameter type in structural refinement may not refer to an abstract type defined outside that refinement
def dummy(x: V) = x
^
one error found
Why?
(Scala: "Parameter type in structural refinement may not refer to an abstract type defined outside that refinement" doesn't really answer this.)
It's disallowed by the spec. See 3.2.7 Compound Types.
Within a method declaration in a structural refinement, the type of any value parameter may only refer to type parameters or abstract types that are contained inside the refinement. That is, it must refer either to a type parameter of the method
itself, or to a type definition within the refinement. This restriction does not apply
to the function’s result type.
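So, for example, these variants are allowed (a sketch; using the structural members may additionally require import scala.language.reflectiveCalls):

object Allowed {
  implicit def pimp[V](xs: Seq[V]) = new {
    def dummy[W](x: W): W = x // ok: W is the method's own type parameter
    def first: V = xs.head    // ok: V appears only in the result type
  }
}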
Before Bug 1906 was fixed, the compiler would have compiled this and you'd have gotten a "method not found" error at runtime. This was fixed in revision 19442, and this is why you get this wonderful message.
The question is then, why is this not allowed?
Here is a very detailed explanation from Gilles Dubochet from the Scala mailing list back in 2007. It roughly boils down to the fact that structural types use reflection, and the compiler does not know how to look up the method to call if it uses a type defined outside the refinement (the compiler does not know ahead of time how to fill in the second parameter of getMethod in p.getClass.getMethod("pimp", Array(?))).
But go look at the post, it will answer your question and some more.
Edit:
Hello list.
I try to define structural types with abstract datatype in function
parameter. ... Any reason?
I have heard about two questions concerning the structural typing
extension of Scala 2.6 lately, and I would like to answer them here.
1. Why did we change Scala's native values (“int”, etc.) boxing scheme to Java's (“java.lang.Integer”)?
2. Why is the restriction on parameters for structurally defined methods (“Parameter type in structural refinement may not refer to abstract type defined outside that same refinement”) required?
Before I can answer these two questions, I need to speak about the
implementation of structural types.
The JVM's type system is very basic (and corresponds to Java 1.4). That
means that many types that can be represented in Scala cannot be
represented in the VM. Path-dependent types (“x.y.A”), singleton types
(“a.type”), compound types (“A with B”) or abstract types are all types
that cannot be represented in the JVM's type system.
To be able to compile to JVM bytecode, the Scala compiler changes the
Scala types of the program to their “erasure” (see section 3.6 of the
reference). Erased types can be represented in the VM's type system and
define a type discipline on the program that is equivalent to that of
the program typed with Scala types (saving some casts), although less
precise. As a side note, the fact that types are erased in the VM
explains why operations on the dynamic representation of types (pattern
matching on types) are very restricted with respect to Scala's type
system.
Until now all type constructs in Scala could be erased in some way.
This isn't true for structural types. The simple structural type “{ def
x: Int }” can't be erased to “Object” as the VM would not allow
accessing the “x” field. Using an interface “interface X { int x(); }”
as the erased type won't work either because any instance bound by a
value of this type would have to implement that interface which cannot
be done in presence of separate compilation. Indeed (bear with me) any
class that contains a member of the same name than a member defined in
a structural type anywhere would have to implement the corresponding
interface. Unfortunately this class may be defined even before the
structural type is known to exist.
Instead, any reference to a structurally defined member is implemented
as a reflective call, completely bypassing the VM's type system. For
example def f(p: { def x(q: Int): Int }) = p.x(4) will be rewritten
to something like:
def f(p: Object) = p.getClass.getMethod("x", Array(Int)).invoke(p, Array(4))
And now the answers.
“invoke” will use boxed (“java.lang.Integer”) values whenever the
invoked method uses native values (“int”). That means that the above
call must really look like “...invoke(p, Array(new
java.lang.Integer(4))).intValue”.
Integer values in a Scala program are already often boxed (to allow the
“Any” type) and it would be wasteful to unbox them from Scala's own
boxing scheme to rebox them immediately as java.lang.Integer.
Worse still, when a reflective call has the “Any” return type,
what should be done when a java.lang.Integer is returned? The called
method may either be returning an “int” (in which case it should be
unboxed and reboxed as a Scala box) or it may be returning a
java.lang.Integer that should be left untouched.
Instead we decided to change Scala's own boxing scheme to Java's. The
two previous problems then simply disappear. Some performance-related
optimisations we had with Scala's boxing scheme (pre-calculate the
boxed form of the most common numbers) were easy to use with Java
boxing too. In the end, using Java boxing was even a bit faster than
our own scheme.
“getMethod”'s second parameter is an array with the types of the
parameters of the (structurally defined) method to lookup — for
selecting which method to get when the name is overloaded. This is the
one place where exact, static types are needed in the process of
translating a structural member call. Usually, exploitable static types
for a method's parameter are provided with the structural type
definition. In the example above, the parameter type of “x” is known to
be “Int”, which allows looking it up.
Parameter types defined as abstract types where the abstract type is
defined inside the scope of the structural refinement are no problem
either:
def f(p: { def x[T](t: T): Int }) = p.x[Int](4)
In this example we know that any instance passed to “f” as “p” will
define “x[T](t: T)” which is necessarily erased to “x(t: Object)”. The
lookup is then correctly done on the erased type:
def f(p: Object) = p.getClass.getMethod("x", Array(Object)).invoke(p,
Array(new java.lang.Integer(4)))
But if an abstract type from outside the structural refinement's scope
is used to define a parameter of a structural method, everything breaks:
def f[T](p: { def x(t: T): Int }, t: T) = p.x(t)
When “f” is called, “T” can be instantiated to any type, for example:
f[Int](new { def x(t: Int) = t }, 4)
f[Any](new { def x(t: Any) = 5 }, 4)
The lookup for the first case would have to be “getMethod("x",
Array(int))” and for the second “getMethod("x", Array(Object))”, and
there is no way to know which one to generate in the body of
“f”: “p.x(t)”.
To allow defining a unique “getMethod” call inside “f”'s body for
any instantiation of “T” would require any object passed to “f” as the
“p” parameter to have the type of “t” erased to “Any”. This would be a
transformation where the type of a class' members depend on how
instances of this class are used in the program. And this is something
we definitely don't want to do (and can't be done with separate
compilation).
Alternatively, if Scala supported run-time types one could use them to
solve this problem. Maybe one day ...
But for now, using abstract types for structural method's parameter
types is simply forbidden.
Sincerely,
Gilles.
Discovered the problem shortly after posting this: I have to define a named class instead of using an anonymous class. (Still would love to hear a better explanation of the reasoning though.)
object Test extends App {
  case class G[V](xs: Seq[V]) {
    def dummy(x: V) = x
  }
  implicit def pimp[V](xs: Seq[V]) = G(xs)
}
works.
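A usage sketch (Demo is a hypothetical object):

object Demo {
  import Test.pimp
  // Compiles now: dummy is a member of the named class G,
  // not of a structural refinement.
  val d: Int = Seq(1, 2, 3).dummy(4)
}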