I am playing around with collections in Scala and found that mutable collections are defined as invariant and immutable collections are defined as covariant. What is the relation between variance and mutability / immutability in Scala?
class Array[T]
class List[+T]
I had found an easy explanation in SIA. Following is straight from there.
Mutable objects need to be invariant
A type parameter is invariant when it’s neither covariant nor contravariant. All Scala mutable collection classes are invariant. An example can explain why mutable objects need to be invariant. Because ListBuffer is mutable, it’s declared as invariant as follows:
final class ListBuffer[A] ...{ ... }
Because it’s declared as invariant, you can’t assign ListBuffer from one type to another. The following code will throw a compilation error:
scala> val mxs: ListBuffer[String] = ListBuffer("pants")
mxs: scala.collection.mutable.ListBuffer[String] =
ListBuffer(pants)
scala> val everything: ListBuffer[Any] = mxs
<console>:6: error: type mismatch;
found : scala.collection.mutable.ListBuffer[String]
required: scala.collection.mutable.ListBuffer[Any]
val everything: ListBuffer[Any] = mxs
Even though String is a subtype of scala.Any, Scala still doesn’t let you assign mxs to everything. To understand why, assume ListBuffer is covariant and the following code snippet works without any compilation problem:
scala> val mxs: ListBuffer[String] = ListBuffer("pants")
mxs: scala.collection.mutable.ListBuffer[String] =
ListBuffer(pants)
scala> val everything: ListBuffer[Any] = mxs
scala> everything += 1
res4: everything.type = ListBuffer(1, pants)
Can you spot the problem? Because everything has type ListBuffer[Any], you can store an integer value into a collection of strings. This is a disaster waiting to happen, and it's exactly what happens with Java arrays. To avoid these kinds of problems, it's always a good idea to make mutable objects invariant.
The next question is what happens in the case of immutable collections. It turns out that for immutable objects, covariance isn't a problem at all. If you replace ListBuffer with the immutable List, you can take an instance of List[String] and assign it to List[Any] without a problem.
scala> val xs: List[String] = List("pants")
xs: List[String] = List(pants)
scala> val everything: List[Any] = xs
everything: List[Any] = List(pants)
The only reason this assignment is safe is that List is immutable. You can prepend 1 to the xs list, and it will return a new List of type List[Any].
scala> 1 :: xs
res5: List[Any] = List(1, pants)
Again, this addition is safe because the cons (::) method always returns a new List, and its type is determined by the types of the elements in the List. The only type that can hold both an integer value and a reference value is scala.Any. This is an important property to remember about type variance when dealing with mutable/immutable objects.
The best way to understand why these variance rules matter is to see the problem that arises when they're absent. Try to spot the problem in the following Java code example:
Object[] arr = new Integer[1];
arr[0] = "Hello, there!";
You end up assigning a string to an integer array. Java only catches this error at runtime, by throwing an ArrayStoreException, because Java arrays are covariant. Scala stops these kinds of errors at compile time by keeping its mutable collections, such as Array, invariant.
Hope this helps.
A type parameter can only be marked covariant if it appears only in covariant positions. Generally, this means that the class/trait/object has methods that return values of the variant type but does not have methods with parameters of the variant type. Mutable collections always have methods with parameters of the variant type, e.g. update. Imagine what would happen if Array could be declared Array[+T]:
val as = Array[String]("a string")
// this statement won't typecheck in actual Scala
val aa: Array[AnyRef] = as
aa(0) = ("I'm a string...", "but this tuple itself isn't!")
// Tuples don't have a substring method, so this would fail at run-time
// if the compiler allowed this code to compile.
as(0).substring(0)
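To see that position check in action, here is a minimal sketch of my own (a hypothetical ReadOnlyCell class, not taken from the answer above): a covariant type parameter is accepted in result positions, but the compiler rejects it the moment it shows up as a method parameter.
class ReadOnlyCell[+T](init: T) {
  def get: T = init              // T in a covariant (result) position: compiles fine
  // def set(x: T): Unit = ()    // would be rejected, roughly: "covariant type T occurs
  //                             //  in contravariant position in type T of value x"
}
val cell: ReadOnlyCell[Any] = new ReadOnlyCell[String]("ok")   // safe, because nothing can be written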
Related
I have function:
def listSum[T](xs :List[T])(implicit abc : Numeric[T]): T = {
xs.sum
}
val IntList: List[Int] = List (1, 2, 3, 4)
val DList: List[Double] = List (1.0, 2.0, 3, 4)
The code example above works fine, but when I change it to the function below, it stops working with this error:
could not find implicit value for parameter abc: Numeric[AnyVal]
Since AnyVal is the base type, I can do the addition, can't I?
Where are all the implicits defined?
def listSum(xs :List[AnyVal])(implicit abc : Numeric[AnyVal]) = {
xs.sum
}
val AList: List[AnyVal] = List (1, 2, 3, 4)
Also, this is not working, I think for the same reason:
def listSum[T](xs :List[T])(implicit abc : Numeric[T]): T = {
xs.sum
}
val BList : List[Boolean] = List(true, false)
println(listSum(BList))
There are several incorrect assumptions in your question:
AnyVal is the supertype of all primitive types and value classes. Along with AnyRef, it's one of the two type branches rooted in the Any type, which is the true supertype of everything.
The Numeric type is not covariant, which means that the existence of Numeric[Int] does not imply the existence of Numeric[AnyVal]. This is quite logical if you think about it for a second: integer numbers are a subdomain of real numbers, but knowing how to multiply integers does not mean you know how to multiply reals.
Subtyping polymorphism is not directly related to restricted parametric polymorphism. The first is mostly handled at runtime, while the second is generally resolved at compile time. They interact in very specific ways in Scala.
There is no single place where implicit values are defined. There are multiple places, depending on the specific type and on the place in the code.
All the implicit definitions for Numeric[T] are defined in scala/math/Numeric.scala
(assuming scala 2.10.4)
There is no implicit definition for Numeric[AnyVal], which is why you get the error for your first example (this makes sense, because one cannot define numeric operations while knowing nothing about the underlying type).
There is also no implicit definition for Numeric[Boolean], which is why you get the error for your second example (this also makes sense, because one cannot define the plus, minus, times, negate, etc. operations on Booleans without assuming something about the specific domain of the project).
You can define an implicit Numeric[Boolean] yourself if that makes sense for your project, but in general it is better to avoid using Numeric[T] with the Boolean type.
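If you do decide to go that route, a rough sketch of a hand-rolled instance might look like the following. This is my own illustration, treating false/true as 0/1 with XOR as addition; the exact set of abstract methods depends on the Scala version (2.13, for instance, also asks for parseString).
implicit object BooleanIsNumeric extends Numeric[Boolean] {
  def plus(x: Boolean, y: Boolean): Boolean  = x ^ y   // addition mod 2
  def minus(x: Boolean, y: Boolean): Boolean = x ^ y   // subtraction mod 2 is the same operation
  def times(x: Boolean, y: Boolean): Boolean = x && y
  def negate(x: Boolean): Boolean            = x       // -x == x mod 2
  def fromInt(x: Int): Boolean               = x % 2 != 0
  def toInt(x: Boolean): Int                 = if (x) 1 else 0
  def toLong(x: Boolean): Long               = toInt(x).toLong
  def toFloat(x: Boolean): Float             = toInt(x).toFloat
  def toDouble(x: Boolean): Double           = toInt(x).toDouble
  def compare(x: Boolean, y: Boolean): Int   = toInt(x) - toInt(y)
}
println(listSum(BList))   // now compiles; sums booleans in the two-element ring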
I'm trying to convert a type tag into a java class that maintains/persists normally-erased type parameters. There are quite a few libraries that benefit from conversions like these (such as Jackson, and Guice). I'm currently trying to migrate Manifest based code to TypeTag since Manifests are insufficient for some corner cases.
The JVM treats arrays specially in comparison to other data types. The difference between classOf[Int] and classOf[Array[Int]] is that the method Class.isArray() returns true for the latter.
The Manifest implementation was simple. Manifest.erasure was a Class instance where isArray() was already valid/true.
The TypeTag implementation is trickier. There is no quick and easy erasure method. In fact, the 'similar' TypeTag-based facility, RuntimeMirror.runtimeClass, prefers not to handle creating any Array-based classes on our behalf. From the documentation:
Note: If the Scala symbol is ArrayClass, a ClassNotFound exception is
thrown because there is no unique Java class corresponding to a Scala
generic array
To work around this I try to detect if it is an Array. If it is an array, then I manually create the class object. However, I've come across an additional edge case when Array has an unknown type argument.
First, let me show you an example that is not a higher-kinded type.
import scala.reflect.runtime.universe._
class A[T]
val innerType = typeOf[A[Array[_]]].asInstanceOf[TypeRefApi].args.head
innerType <:< typeOf[Array[_]] // Returns true.
So far so good.
class B[T[_]]
val innerType = typeOf[B[Array]].asInstanceOf[TypeRefApi].args.head
innerType <:< typeOf[Array[_]] // Returns false.
I can't create a typeOf[Array] since it complains about the missing type parameter. How can I detect that B has a type argument of Array?
Also, what would the class instance look like in this case? Is it an Array[Object]?
Decompose again:
scala> innerType match { case TypeRef(pre, sym, args) => sym == definitions.ArrayClass }
res13: Boolean = true
That might get you part-way.
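Building on that ArrayClass check, one way to answer the last part of the question (what the Class instance would look like) is to construct it by hand. This is a rough sketch of my own, assuming an unapplied Array falls back to an Object element type, so you end up with classOf[Array[Object]]:
import scala.reflect.runtime.universe._

// Hypothetical helper: map a reflected Type to a java.lang.Class, special-casing Array
// because runtimeClass refuses to handle generic arrays.
def toJavaClass(tpe: Type, mirror: Mirror): Class[_] = tpe match {
  case TypeRef(_, sym, args) if sym == definitions.ArrayClass =>
    val elem = args.headOption.map(toJavaClass(_, mirror)).getOrElse(classOf[AnyRef])
    java.lang.reflect.Array.newInstance(elem, 0).getClass   // e.g. classOf[Array[Object]]
  case _ =>
    mirror.runtimeClass(tpe)
}

// toJavaClass(innerType, runtimeMirror(getClass.getClassLoader))  // -> class [Ljava.lang.Object;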
Another way is to compare typeConstructors:
import scala.reflect.runtime.universe._
class B[T[_]]
val innerType = typeOf[B[Array]].asInstanceOf[TypeRefApi].args.head
innerType.typeConstructor =:= typeOf[Array[_]].typeConstructor
innerType: reflect.runtime.universe.Type = Array
res4: Boolean = true
This also works in general when we need to detect that a Type is an Array (of any element type).
Trying to get the erased type of such an innerType (for comparison) fails for Array, while it works for other higher-kinded types:
class B[T[_]]
val innerType = typeOf[B[Array]].typeArgs.head
innerType.typeArgs
innerType.erasure // fails
innerType.erasure =:= typeOf[Array[_]].erasure
defined class B
innerType: reflect.runtime.universe.Type = Array
res4: List[reflect.runtime.universe.Type] = List()
java.util.NoSuchElementException: head of empty list
at scala.collection.immutable.Nil$.head(List.scala:431)
at scala.collection.immutable.Nil$.head(List.scala:428)
at scala.reflect.internal.transform.Erasure$ErasureMap.apply(Erasure.scala:126)
at scala.reflect.internal.transform.Transforms$class.transformedType(Transforms.scala:43)
at scala.reflect.internal.SymbolTable.transformedType(SymbolTable.scala:16)
at scala.reflect.internal.Types$TypeApiImpl.erasure(Types.scala:225)
at scala.reflect.internal.Types$TypeApiImpl.erasure(Types.scala:218)
... 36 elided
In Scala, you can declare a variable by specifying the type, like this: (method 1)
var x : String = "Hello World"
or you can let Scala automatically detect the variable type (method 2)
var x = "Hello World"
Why would you use method 1? Does it have a performance benefit?
And once the variable has been declared, will it behave exactly the same in all situations whether it has been declared by method 1 or method 2?
Type inference is done at compile time - it's essentially the compiler figuring out what you mean, filling in the blanks, and then compiling the resulting code.
What this means is that there can be no runtime cost to type inference. The compile time cost, however, can sometimes be prohibitive and require you to explicitly annotate some of your expressions.
You will not see any performance difference between these two variants.
They will both be compiled to the same code.
The other answers assume that the compiler inferred what you think it inferred.
It is easy to demonstrate that specifying the type in a definition will set the expected type for the RHS of the definition and guide type inference.
For example, in this method that builds a collection of something, A is inferred to be Nothing, which may not be what you wanted:
scala> def build[A, B, C <: Iterable[B]](bs: B*)(implicit cbf: CanBuildFrom[A, B, C]): C = {
| val b = cbf(); println(b.getClass); b ++= bs; b.result }
build: [A, B, C <: Iterable[B]](bs: B*)(implicit cbf: scala.collection.generic.CanBuildFrom[A,B,C])C
scala> val xs = build(1,2,3)
class scala.collection.immutable.VectorBuilder
xs: scala.collection.immutable.IndexedSeq[Int] = Vector(1, 2, 3)
scala> val xs: List[Int] = build(1,2,3)
class scala.collection.mutable.ListBuffer
xs: List[Int] = List(1, 2, 3)
scala> val xs: Seq[Int] = build(1,2,3)
class scala.collection.immutable.VectorBuilder
xs: Seq[Int] = Vector(1, 2, 3)
Obviously, it matters for runtime performance whether you get a List or a Vector.
This is a lame example, but in many expressions you wouldn't notice the type of an intermediate collection unless it caused a performance problem.
Sample conversations:
https://groups.google.com/forum/#!msg/scala-language/mQ-bIXbC1zs/wgSD4Up5gYMJ
http://grokbase.com/p/gg/scala-user/137mgpjg98/another-funny-quirk
Why is Seq.newBuilder returning a ListBuffer?
https://groups.google.com/forum/#!topic/scala-user/1SjYq_qFuKk
In the simple example you gave, there is no difference in the generated byte code, and therefore no difference in performance. It would also make no noticeable difference in compilation speed.
In more complex code (likely involving implicits) you could run into cases where compile-time performance would be noticeably improved by specifying some types. However, I would completely ignore this until and unless you run into it -- specify types or not for other, better reasons.
More in line with your question, there is one very important case where it is a good idea to specify the type to ensure good run-time performance. Consider this code:
val x = new AnyRef { def sayHi() = println("Howdy!") }
x.sayHi
That code uses reflection to call sayHi, and that's a huge performance hit. Recent versions of Scala will warn you about this code for that reason, unless you have enabled the language feature for it:
warning: reflective access of structural type member method sayHi should be enabled
by making the implicit value scala.language.reflectiveCalls visible.
This can be achieved by adding the import clause 'import scala.language.reflectiveCalls'
or by setting the compiler option -language:reflectiveCalls.
See the Scala docs for value scala.language.reflectiveCalls for a discussion
why the feature should be explicitly enabled.
You might then change the code to this, which does not make use of reflection:
trait Talkative extends AnyRef { def sayHi(): Unit }
val x = new Talkative { def sayHi() = println("Howdy!") }
x.sayHi
For this reason you generally want to specify the type of the variable when you are defining classes this way; that way, if you inadvertently add a method that would require reflection to call, you'll get a compilation error, because the method won't be defined for the variable's type. So while it is not the case that specifying the type makes the code run faster, it is the case that if the code would be slow, specifying the type makes it fail to compile.
val x: AnyRef = new AnyRef { def sayHi() = println("Howdy!") }
x.sayHi // ERROR: sayHi is not defined on AnyRef
There are of course other reasons why you might want to specify a type. They are required for the formal parameters of methods/functions, and for the return types of methods that are recursive or overloaded.
Also, you should always specify return types for methods in a public API (unless they are just trivially obvious), or you might end up with different method signatures than you intended, and then risk breaking existing clients of your API when you fix the signature.
You may of course want to deliberately widen a type so that you can assign other types of things to a variable later, e.g.
var shape: Shape = new Circle(1.0)
shape = new Square(1.0)
But in these cases there is no performance impact.
It is also possible that specifying a type will cause a conversion, and of course that will have whatever performance impact the conversion imposes.
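For example (a small illustration of my own, not from the answer above), ascribing Seq[Char] to a String literal triggers Predef's wrapString conversion, so the value you end up storing is a WrappedString rather than the plain String you started with:
val s = "hello"                 // s: String, no conversion involved
val cs: Seq[Char] = "hello"     // Predef.wrapString converts the String to a WrappedString
// cs.getClass  ==> class scala.collection.immutable.WrappedString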
I know what type erasure is, so I think the Scala REPL cannot detect a generic type exactly.
As mentioned above, Scala can't detect the generic type in a pattern match like this:
case list: List[Int]
But when I declare a List value, Scala detects which generic type it contains.
scala> val a = List(1,2,3)
a: List[Int] = List(1, 2, 3)
How is it possible?
val a = List(1,2,3)
This is equivalent to:
val a = List.apply[Int](1,2,3)
The result type of List.apply[Int](...) is List[Int], and therefore the type inferencer assigns this type to the identifier a. This happens during compilation. The REPL does not "detect" the type at runtime.
This is different from a pattern match:
val a: Any = ...
a match {
case list: List[Int] => ...
}
Here, we have a value a, for which we don't have any type information. So we're trying to check what type it is, but now we're doing this in runtime. And here we indeed cannot determine the exact type. The best we can do here is to match against List[_].
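A quick sketch of that limitation (my own example, warning message abbreviated): the element type is erased, so an unchecked List[Int] pattern happily matches a list of strings, and the compiler warns about it.
val a: Any = List("a", "b")
a match {
  case _: List[Int] => println("matched List[Int], even though it holds Strings")   // this branch runs
  case _            => println("no match")
}
// warning: non-variable type argument Int in type pattern List[Int] is unchecked
// since it is eliminated by erasure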
Summarizing:
When you type some code in the REPL, it first gets compiled into bytecode and then evaluated. The displayed type information comes from the compilation phase, so it doesn't suffer from type erasure.
When you write:
val a = List(1,2,3)
Scala uses type inference to find the closest matching type during compile time. Essentially it will rewrite it for you as:
val a: List[Int] = ...
It will use this type-parameter information at compile time to type-check your code and erase it afterwards, so at runtime you effectively have a List[_] in your program. This is because the JVM works this way: type erasure.
When you pattern match on a list at runtime, its type information has been erased, so any List will match. The Scala compiler will warn you about this during compilation.
This works the same way in the REPL and in a regular compile-and-run cycle.
C:\Users\John>scala
Welcome to Scala version 2.9.2 (Java HotSpot(TM) Client VM, Java 1.6.0_32).
Type in expressions to have them evaluated.
Type :help for more information.
scala> import scala.collection.mutable.Map
import scala.collection.mutable.Map
scala> Map()
res4: scala.collection.mutable.Map[Nothing,Nothing] = Map()
When using Map() without keyword new the apply method from the corresponding companion object will be called. But the Scala Documentation does not list an apply method for mutable Maps (only an apply method to retrieve a value from the map is provided).
Why is the code above still working ?
It looks like a bug in Scaladoc. There is an apply method in object collection.mutable.Map (inherited from GenMapFactory), but it does not appear in the doc for Map. This problem seems to be fixed in the docs for the upcoming 2.10.
Note: you must look at the object documentation, not the class documentation. The apply method on the class, of course, works with an existing map instance and retrieves data from it.
There is an apply() method on the companion object of scala.collection.immutable.Map. It is inherited from scala.collection.generic.MapFactory. That method takes a variable number of pair arguments and is usually used as
Map("foo"->3, "bar"->4, "barangus"->5)
Calling it with no arguments evidently works as well, but someone brighter than me would have to explain why the type inference engine comes up with scala.collection.mutable.Map[Nothing,Nothing] for it.
As sepp2k already mentioned in his comment, the symbol Map refers to the companion object of Map, which gives you access to its single instance. In pattern matching this is often used to identify a message:
scala> case object Foo
defined module Foo
scala> def send[A](a: A) = a match { case Foo => "got a Foo" case Map => "got a Map" }
send: [A](a: A)String
scala> send(Map)
res8: String = got a Map
scala> send(Foo)
res9: String = got a Foo
If you write Map() you will call the apply method of the object Map. Because you did not give any values to insert into the Map, the compiler can't infer any element type, so it has to use the bottom type, Nothing, which is a subtype of every type. It is the only type it can infer that will not break the type system, given the variance annotations involved. If Nothing did not exist, the following code would not compile:
scala> Map(1 -> 1) ++ Map()
res10: scala.collection.mutable.Map[Int,Int] = Map(1 -> 1)
If you take a look at the type signature of ++, which is as follows (source),
def ++[B1 >: B](xs: GenTraversableOnce[(A, B1)]): Map[A, B1]
you will notice the lower-bounded type parameter B1 >: B. Because Nothing is a subtype of everything (including B in our case), the compiler can find a B1 (which is Int in our case) and successfully infer a type for the resulting Map. This lower bound is needed because B is covariant (source),
trait MapLike[A, +B, ...] ...
which means that we are not allowed to use it directly as a method parameter (because method parameters are in contravariant position). If covariant type parameters were allowed in method parameter positions, Liskov's substitution principle would no longer be upheld by the type system. Thus, to get the code to compile, a new type parameter (here called B1) has to be introduced.
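The same constraint can be shown with a stripped-down, hypothetical covariant container of my own (not the real Map code): the covariant element type cannot be used as a plain method parameter, so a fresh lower-bounded type parameter is introduced instead.
class Box[+A](val items: List[A]) {
  // def add(x: A): Box[A] = ...                         // rejected: covariant A in contravariant position
  def add[B >: A](x: B): Box[B] = new Box(x :: items)    // accepted: B is a fresh, invariant parameter
}
val strings: Box[String] = new Box(List("a"))
val widened: Box[Any]    = strings.add(1)                // B is inferred as Any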
As Didier Dupont already pointed out, there are some bugs in Scaladoc 2.9 that are solved in 2.10. Not only are some missing methods displayed there, but methods added by an implicit conversion can also be displayed (Array, for example, displays a lot of methods in 2.10 that are not displayed in 2.9).