Scala upper bound for a generic type parameter in a field

I have the following field:
var operations = Map.empty[Long, _ <: Operation]
I want the second type parameter to have an upper bound of the Operation class. When I write it as above, I get the error unbound wildcard type.
How can I achieve this?

Map is defined as trait Map[A, +B], so it is covariant in B; Operation acts as the upper bound of that type parameter in this example.
Just say Map.empty[Long, Operation]

I'm going to address the actual error at hand. As unnecessary as it is, it will work if you define it like this:
var operations: Map[Long, _ <: Operation] = Map.empty // Or some Map that conforms
The difference is that the above code declares that operations has type Map[Long, _ <: Operation], that is, a map from Long to some type we don't care about, so long as it is bounded above by Operation. But Map.empty is a method call that expects actual types to be supplied as its type parameters (otherwise they are inferred as Nothing), not an existential.
Of course, this is all unnecessary because Map is covariant over its second type parameter. That means that if you have some Z that is a sub-type of Operation, then Map[Long, Z] is a sub-type of Map[Long, Operation].
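For illustration, here is a small sketch of what that covariance gives you; the Operation subtype below is hypothetical:
trait Operation                          // stand-in for the real class
class AddOperation extends Operation     // hypothetical subtype

// Because Map is covariant in its value type, a Map[Long, AddOperation]
// is also a Map[Long, Operation], so no wildcard is needed:
var operations: Map[Long, Operation] = Map.empty
operations = Map(1L -> new AddOperation)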


Scala upper bound notation accepts all types (not only super type)

In Scala's upper bound concept, the given type or its supertype can be passed. For example, in the method below S is the type and A is the type parameter we pass. This method actually accepts every value in Scala's type system: S, subtypes of S, and supertypes of S. This is because all types extend Any.
def method[A >: S](a:A) = { ... }
Then why can't we write all such upper bound notations simply as Any (which is the universal type in Scala)? The above definition could be rewritten as:
def met(a:Any) = { ... }
This is easy to understand.
What sort of advantage does the upper bound bring?
Thanks!
It allows you to lose less type information than you would by going to Any.
For example, if you have Dog and Cat that both inherit from Animal, this works:
val maybeDog: Option[Dog] = ???
val pet = maybeDog.getOrElse(Cat())
Because getOrElse has a signature of def getOrElse[B >: A](default: => B): B, B is inferred to be Animal (the least upper bound of Cat and Dog), so the static type of val pet is Animal. If the signature used Any instead, the result would require unsafe casting to work with further. If it did not introduce a new type parameter at all, you would be forced to write (maybeDog: Option[Animal]).getOrElse(Cat()) to achieve the same unification.
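For reference, a self-contained sketch of that example; the Animal/Dog/Cat classes below are illustrative stand-ins:
class Animal
class Dog extends Animal
class Cat extends Animal

val maybeDog: Option[Dog] = Some(new Dog)

// B is inferred as Animal, the least upper bound of Dog and Cat:
val pet: Animal = maybeDog.getOrElse(new Cat)

// With Any in place of the bounded B, the static type degrades to Any:
def getOrElseAny(opt: Option[Dog])(default: => Any): Any = opt.getOrElse(default)
val lost: Any = getOrElseAny(maybeDog)(new Cat)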
Additionally, it restricts the implementation to not do something completely silly. For example, this typechecks:
def getOrElse[A](option: Option[A])(default: => Any): Any = 42 // Int is Any, so why not?
While this doesn't:
def getOrElse[A, B >: A](option: Option[A])(default: => B): B = 42
Because while anything can go as B, that doesn't imply that Int is always a subtype of B.

How do I get the type signature to agree with the Scala compiler when extracting a value out of an Option[A]?

Let's say I have a variable x that is of type Option[Double]. I want to get that double out of the variable, so I would suspect that x.getOrElse(None) would be the solution.
The type signature of getOrElse is as follows:
def getOrElse[B >: A](default: => B): B
None is simply an Option[Nothing]. If I write the following:
def mean(xs: Seq[Double]): Option[Double] =
  if (xs.isEmpty) None
  else Some(xs.sum / xs.length)
val avg = mean(xs) getOrElse(None) // compiles
val theAvg: Double = mean(xs) getOrElse(None) // doesn't compile
What is going on here with the types? The REPL tells me (using :t) that avg is of type Any. How can this be? Why does the type match in the first place when None is of type Option[Nothing]?
What am I not understanding about the types here?
As the type signature says, getOrElse returns a B which is a super type of A (which is the type inside the Option).
That makes sense: if you have a Some(a: A), then you return that a, which is of type A; and since A is a subtype of B, it can be upcast to B without any problem. And if it is a None, then you return the default, which is of type B.
In your case, you are using None as your default, so the compiler has to figure out the least upper bound of Double and Option[Nothing], which, as you can see, is Any (because Any is the supertype of every type, and there is no other relation between Double and Option in the type hierarchy).
Now, I believe you are misunderstanding how getOrElse works, and what is the purpose of None.
I think this is what you really want.
// Or any other double that makes sense as a default for you.
val avg: Double = mean(xs).getOrElse(default = 0.0d)
How can this be?
The inferred type Any here is a result of the compiler trying to find a common ancestor of Double and Option[Nothing]. The first common ancestor for both these types is Any, and that is why the compiler is inferring that.
Why does the type match in the first place when None is of type Option[Nothing]?
Great question. Let's inspect the signature for getOrElse:
final def getOrElse[B >: A](default: => B): B
Let's zoom in on the constraint B >: A. What does that mean? It means that we can specify any type B that is a supertype of A. Why do we want a supertype of A to begin with? Why not A itself? (If the signature forced us to use A here, the default would be constrained to Double and your example wouldn't compile.) To answer that question, we need to look at the definition of Option[A]:
sealed abstract class Option[+A]
We see that A has a + next to it. That plus indicates the presence of covariance. In short, covariance allows us to preserve the "is subtype relation" between types which are themselves defined inside other "container" types, making the following relation hold:
A <: B <=> Option[A] <: Option[B]
Which means that if A is a subtype of B, we can treat Option[A] as a subtype of Option[B]. Why does this matter, and what does it have to do with the method signature of getOrElse? Well, covariance imposes restrictions on type parameters! Covariant type parameters cannot be used as input parameters, or more generally in input-parameter position (a position is contravariant if it occurs under an odd number of contravariant type constructors, but that explanation is too lengthy, so I'll abbreviate); they can only appear in output position in the type that contains them. Thus, the following definition is illegal under the variance rules:
final def getOrElse(default: => A): A
Because A appears here both in the input type and the output type.
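To see the rule in action, here is a minimal sketch with a hypothetical MyOption standing in for Option; the rejected definition is left as a comment:
// MyOption is an illustrative stand-in for Option, shown only to surface the error.
sealed abstract class MyOption[+A] {
  // def getOrElse(default: => A): A = ???
  //   error: covariant type A occurs in contravariant position
  //          in type => A of value default

  // Widening to a fresh B >: A keeps the covariant A out of input position:
  def getOrElse[B >: A](default: => B): B = ???
}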
Luis did a good job explaining why you got an Any for your result type. I just wanted to add that if you wanted to use the Double for further processing, but still correctly handle a None, you would typically do that with a map, to be able to stay inside the Option until you actually have a useful value to replace None with, like:
val message: Option[String] = avg map {x => s"The average is $x."}
println(message getOrElse "Can't take average of empty sequence.")

Ordering can't take parameter type for context bound

The following code uses a context bound:
def max[T: Ordering](a: T, b: T): T = {
  val ord = implicitly[Ordering[T]]
  if (ord.compare(a, b) > 0) a else b
}
In the [T: Ordering] part, Ordering doesn't take a type parameter. If I write it as
[T: Ordering[T]], the compiler complains that Ordering can't take a type parameter.
But Ordering can indeed take a type parameter, and I thought a generic type must always take one.
Am I missing some special rule here?
Thanks
[Left: Right] (a context bound) requires a type Left of some kind k and a Right of kind k -> *. That is, Left is some type or type constructor, and Right is a type constructor that can take Left as an argument to produce a concrete type. So, Right is never a concrete type, but Right[Left] always is.
Since all that is probably mathy gibberish: [T: Ordering] desugars to implicit ord: Ordering[T], which works, but [T: Ordering[T]] desugars to implicit fail: Ordering[T][T], which is nonsense. So, to your claim that Ordering needs a type argument, I say that it does need one, but instead of you writing the type argument, the compiler does it for you. That's why context bounds were created, so that when you write [T: Ordering] you don't need to write T twice.
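As a rough sketch of that desugaring (the implicit parameter name is the compiler's business; ord here is just illustrative):
// Context-bound form:
def max[T: Ordering](a: T, b: T): T =
  if (implicitly[Ordering[T]].compare(a, b) > 0) a else b

// Roughly what the compiler desugars it to:
def maxDesugared[T](a: T, b: T)(implicit ord: Ordering[T]): T =
  if (ord.compare(a, b) > 0) a else b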

Scala type lower bound bug?

case class Level[B](b: B) {
  def printCovariant[A <: B](a: A): Unit = println(a)
  def printInvariant(b: B): Unit = println(b)
  def printContravariant[C >: B](c: C): Unit = println(c)
}

class First
class Second extends First
class Third extends Second
// First >: Second >: Third

object Test extends App {
  val second = Level(new Second) // set B as Second

  //second.printCovariant(new First) // error and reasonable
  second.printCovariant(new Second)
  second.printCovariant(new Third)

  //second.printInvariant(new First) // error and reasonable
  second.printInvariant(new Second)
  second.printInvariant(new Third) // why no error?

  second.printContravariant(new First)
  second.printContravariant(new Second)
  second.printContravariant(new Third) // why no error?
}
It seems Scala's lower-bound type checking has bugs... for the invariant case and the contravariant case.
I wonder whether the above code has bugs or not.
Always keep in mind that if Third extends Second, then whenever a Second is wanted, a Third can be provided. This is called subtype polymorphism.
Having that in mind, it's natural that second.printInvariant(new Third) compiles. You provided a Third which is a subtype of Second, so it checks out. It's like providing an Apple to a method which takes a Fruit.
This means that your method
def printCovariant[A<:B](a: A): Unit = println(a)
can be written as:
def printCovariant(a: B): Unit = println(a)
without losing any information. Due to subtype polymorphism, the second one accepts B and all its subclasses, which is the same as the first one.
Same goes for your second error case - it's another case of subtype polymorphism. You can pass the new Third because Third is actually a Second (note that I'm using the "is-a" relationship between subclass and superclass taken from object-oriented notation).
In case you're wondering why we even need upper bounds (isn't subtype polymorphism enough?), observe this example:
def foo1[A <: AnyRef](xs: A) = xs
def foo2(xs: AnyRef) = xs
val res1 = foo1("something") // res1 is a String
val res2 = foo2("something") // res2 is an AnyRef
Now we do observe the difference. Even though subtype polymorphism will allow us to pass in a String in both cases, only method foo1 can reference the type of its argument (in our case a String). Method foo2 will happily take a String, but will not really know that it's a String. So, upper bounds can come in handy when you want to preserve the type (in your case you just print out the value so you don't really care about the type - all types have a toString method).
EDIT:
(extra details, you may already know this but I'll put it for completeness)
There are more uses of upper bounds than what I described here, but when parameterizing a method this is the most common scenario. When parameterizing a class, you can use upper bounds to describe covariance and lower bounds to describe contravariance. For example,
class SomeClass[U] {
  def someMethod(foo: Foo[_ <: U]) = ???
}
says that parameter foo of method someMethod is covariant in its type. How's that? Well, normally (that is, without tweaking variance), subtype polymorphism wouldn't allow us to pass a Foo parameterized with a subtype of its type parameter. If T <: U, that doesn't mean that Foo[T] <: Foo[U]. We say that Foo is invariant in its type. But we just tweaked the method to accept Foo parameterized with U or any of its subtypes. That is effectively covariance. So, as far as someMethod is concerned, if some type T is a subtype of U, then Foo[T] is a subtype of Foo[U]. Great, we achieved covariance. But note that I said "as far as someMethod is concerned". Foo is covariant in its type in this method, but in others it may be invariant or contravariant.
This kind of variance declaration is called use-site variance because we declare the variance of a type at the point of its usage (here it's used as the parameter type of someMethod). This is the only kind of variance declaration in, say, Java. When using use-site variance, you have to watch out for the get-put principle (google it). Basically this principle says that we can only get stuff out of covariant classes (we can't put) and vice versa for contravariant classes (we can put but can't get). In our case, we can demonstrate it like this:
class Foo[T] { def put(t: T): Unit = println("I put some T") }
def someMethod(foo: Foo[_ <: String]) = foo.put("asd") // won't compile
def someMethod2(foo: Foo[_ >: String]) = foo.put("asd")
More generally, we can only use covariant types as return types and contravariant types as parameter types.
Now, use-site declaration is nice, but in Scala it's much more common to take advantage of declaration-site variance (something Java doesn't have). This means that we would describe the variance of Foo's generic type at the point of defining Foo. We would simply say class Foo[+T]. Now we don't need to use bounds when writing methods that work with Foo; we proclaimed Foo to be permanently covariant in its type, in every use case and every scenario.
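As a minimal sketch of declaration-site covariance (the Box, Animal and Dog names below are illustrative, not from the question):
class Animal
class Dog extends Animal

class Box[+T](val value: T)              // variance declared once, at the definition

def describe(box: Box[Animal]): Unit = println(box.value)

describe(new Box(new Dog))               // Box[Dog] <: Box[Animal], no bounds needed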
For more details about variance in Scala feel free to check out my blog post on this topic.

Why is the Aux technique required for type-level computations?

I'm pretty sure I'm missing something here, since I'm pretty new to Shapeless and still learning, but when is the Aux technique actually required? I see that it is used to expose a type member by raising it into the signature of another "companion" type definition.
trait F[A] { type R; def value: R }
object F { type Aux[A,RR] = F[A] { type R = RR } }
but isn't this nearly equivalent to just putting R in the type signature of F?
trait F[A,R] { def value: R }
implicit def fint = new F[Int,Long] { val value = 1L }
implicit def ffloat = new F[Float,Double] { val value = 2.0D }
def f[T,R](t:T)(implicit f: F[T,R]): R = f.value
f(100) // res4: Long = 1L
f(100.0f) // res5: Double = 2.0
I see that path-dependent types would bring benefits if one could use them in argument lists, but we know we can't do
def g[T](t:T)(implicit f: F[T], r: Blah[f.R]) ...
thus, we are still forced to put an additional type parameter in the signature of g. By using the Aux technique, we are also required to spend extra time writing the companion object. From a usage standpoint, it looks to a naive user like me as if there is no benefit to using path-dependent types at all.
There is only one case I can think of: when a given type-level computation returns more than one type-level result, and you may want to use only one of them.
I guess it all boils down to me overlooking something in my simple example.
There are two separate questions here:
Why does Shapeless use type members instead of type parameters in some cases in some type classes?
Why does Shapeless include Aux type aliases in the companion objects of these type classes?
I'll start with the second question because the answer is more straightforward: the Aux type aliases are entirely a syntactic convenience. You don't ever have to use them. For example, suppose we want to write a method that will only compile when called with two hlists that have the same length:
import shapeless._, ops.hlist.Length
def sameLength[A <: HList, B <: HList, N <: Nat](a: A, b: B)(implicit
  al: Length.Aux[A, N],
  bl: Length.Aux[B, N]
) = ()
The Length type class has one type parameter (for the HList type) and one type member (for the Nat). The Length.Aux syntax makes it relatively easy to refer to the Nat type member in the implicit parameter list, but it's just a convenience—the following is exactly equivalent:
def sameLength[A <: HList, B <: HList, N <: Nat](a: A, b: B)(implicit
  al: Length[A] { type Out = N },
  bl: Length[B] { type Out = N }
) = ()
The Aux version has a couple of advantages over writing out the type refinements in this way: it's less noisy, and it doesn't require us to remember the name of the type member. These are purely ergonomic issues, though—the Aux aliases make our code a little easier to read and write, but they don't change what we can or can't do with the code in any meaningful way.
The answer to the first question is a little more complex. In many cases, including my sameLength, there's no advantage to Out being a type member instead of a type parameter. Because Scala doesn't allow multiple implicit parameter sections, we need N to be a type parameter for our method if we want to verify that the two Length instances have the same Out type. At that point, the Out on Length might as well be a type parameter (at least from our perspective as the authors of sameLength).
In other cases, though, we can take advantage of the fact that Shapeless sometimes (I'll talk about specifically where in a moment) uses type members instead of type parameters. For example, suppose we want to write a method that will return a function that will convert a specified case class type into an HList:
def converter[A](implicit gen: Generic[A]): A => gen.Repr = a => gen.to(a)
Now we can use it like this:
case class Foo(i: Int, s: String)
val fooToHList = converter[Foo]
And we'll get a nice Foo => Int :: String :: HNil. If Generic's Repr were a type parameter instead of a type member, we'd have to write something like this instead:
// Doesn't compile
def converter[A, R](implicit gen: Generic[A, R]): A => R = a => gen.to(a)
Scala doesn't support partial application of type parameters, so every time we call this (hypothetical) method we'd have to specify both type parameters since we want to specify A:
val fooToHList = converter[Foo, Int :: String :: HNil]
This makes it basically worthless, since the whole point was to let the generic machinery figure out the representation.
In general, whenever a type is uniquely determined by a type class's other parameters, Shapeless will make it a type member instead of a type parameter. Every case class has a single generic representation, so Generic has one type parameter (for the case class type) and one type member (for the representation type); every HList has a single length, so Length has one type parameter and one type member, etc.
Making uniquely-determined types type members instead of type parameters means that if we want to use them only as path-dependent types (as in the first converter above), we can, and if we want to use them as if they were type parameters, we can always write out the type refinement (or use the syntactically nicer Aux version). If Shapeless made these types type parameters from the beginning, it wouldn't be possible to go in the opposite direction.
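As a rough sketch of that second direction, the Generic.Aux alias lets us pin the type member as if it were a parameter (the Foo case class and toRepr method below are illustrative):
import shapeless._

case class Foo(i: Int, s: String)

// Using the type member as if it were a parameter, via the Aux alias:
def toRepr[A, R](a: A)(implicit gen: Generic.Aux[A, R]): R = gen.to(a)

val repr: Int :: String :: HNil = toRepr(Foo(1, "a"))   // R is inferred for us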
As a side note, this relationship between a type class's type "parameters" (I'm using quotation marks since they may not be parameters in the literal Scala sense) is called a "functional dependency" in languages like Haskell, but you shouldn't feel like you need to understand anything about functional dependencies in Haskell to get what's going on in Shapeless.