I am learning Scala in order to use it for a project.
One thing I want to get a deeper understanding of is the type system, as it is something I have never used before in my other projects.
Suppose I have set up the following code:
// priority implicits
sealed trait Stringifier[T] {
def stringify(lst: List[T]): String
}
trait Int_Stringifier {
implicit object IntStringifier extends Stringifier[Int] {
def stringify(lst: List[Int]): String = lst.toString()
}
}
object Double_Stringifier extends Int_Stringifier {
implicit object DoubleStringifier extends Stringifier[Double] {
def stringify(lst: List[Double]): String = lst.toString()
}
}
import Double_Stringifier._
object Example extends App {
trait Animal[T0] {
def incrementAge(): Animal[T0]
}
case class Food[T0: Stringifier]() {
def getCalories = 100
}
case class Dog[T0: Stringifier]
(age: Int = 0, food: Food[T0] = Food()) extends Animal[String] {
def incrementAge(): Dog[T0] = this.copy(age = age + 1)
}
}
So in the example, there is a type error:
ambiguous implicit values:
[error] both object DoubleStringifier in object Double_Stringifier of type Double_Stringifier.DoubleStringifier.type
[error] and value evidence$2 of type Stringifier[T0]
[error] match expected type Stringifier[T0]
[error] (age: Int = 0, food: Food[T0] = Food()) extends Animal[String]
Ok, fair enough. But if I remove the context bound, this code compiles, i.e. if I change the code for Dog to:
case class Dog[T0]
(age: Int = 0, food: Food[T0] = Food()) extends Animal[String] {
def incrementAge(): Dog[T0] = this.copy(age = age + 1)
}
Now I assumed that this would also not compile, because this type is more generic, so more ambiguous, but it does.
What is going on here? I understand that when I put the context bound, the compiler doesn't know whether it is a double or an int. But why then would an even more generic type compile? Surely if there is no context bound, I could potentially have a Dog[String] etc, which should also confuse the compiler.
From this answer: "A context bound describes an implicit value, instead of view bound's implicit conversion. It is used to declare that for some type A, there is an implicit value of type B[A] available"
"Now I assumed that this would also not compile, because this type is more generic, so more ambiguous, but it does."
The ambiguity was between implicits. Both
Double_Stringifier.DoubleStringifier
and the anonymous evidence parameter of Dog[T0: Stringifier] (because class Dog[T0: Stringifier](...) is desugared to class Dog[T0](...)(implicit ev: Stringifier[T0])) were the candidates.
(Int_Stringifier#IntStringifier was irrelevant because it has lower priority).
Now you removed the context bound, and only one candidate for the implicit parameter in Food() remains, so there's no ambiguity. I can't see how the type being more generic is relevant. More generic doesn't mean more ambiguous. Either you have ambiguity between implicits or not.
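To make the desugaring concrete, here is a minimal sketch of what the compiler sees (DogDesugared is just an illustrative name, not part of the question's code):
// Roughly what case class Dog[T0: Stringifier](age: Int = 0, food: Food[T0] = Food())
// desugars to:
case class DogDesugared[T0](age: Int = 0, food: Food[T0] = Food())
                           (implicit ev: Stringifier[T0]) extends Animal[String] {
  def incrementAge(): DogDesugared[T0] = this.copy(age = age + 1)
}
// While type-checking the default argument Food(), the compiler needs a Stringifier[T0];
// both ev and the imported DoubleStringifier match, hence "ambiguous implicit values".
// Without the context bound there is no ev, so DoubleStringifier is the only candidate
// and the default argument compiles.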
Actually, if you remove the import but keep the context bound, the anonymous evidence is not seen in default values. So it counts for ambiguity but doesn't count when it is alone :)
Scala 2.13.2, 2.13.3.
It seems to me (and if I'm wrong I'm hoping @DmytroMitin will correct me) that the key to understanding this is the default value supplied for the food parameter, which makes class Dog both a definition site, requiring that an implicit be available at the call site, and a call site itself, requiring that an implicit be in scope at compile time.
The import earlier in the code supplies the implicit required for the Food() call site, but the Dog constructor requires an implicit, placed in ev, from its call site. Thus the ambiguity.
Related
I'm trying to port parts of a Haskell library for datatype-generic programming to Scala. Here's the problem I've run into:
I've defined a trait, Generic, with some container-type parameter:
trait Generic[G[_]] {
// Some function declarations go here
}
Now I have an abstract class, Collect, with three type parameters, and a function declaration (it signifies a type that can collect all subvalues of type B into a container of type F[_] from some structure of type A):
abstract class Collect[F[_],B,A] {
def collect_ : A => F[B]
}
In order to make it extend Generic, the first two type parameters F[_] and B are given, and A is curried (this effect is simulated using type lambdas):
class CollectC[F[_],B] extends Generic[({type C[A] = Collect[F,B,A]})#C] {
// Function definitions go here
}
The problem is that I need the last class definition to be implicit, because later on in my code I will need to be able to write functions like
class GUnit[G[_]](implicit gg: Generic[G]) {
// Some definitions
}
When I simply prepend implicit to the class definition, I get an error saying that implicit classes must accept exactly one primary constructor parameter. Has anyone encountered a similar problem? Is there a known way to work around it? I don't currently see how I could refactor my code while keeping the same functionality, so any advice is welcome. Thanks in advance!
Implicit classes don't work that way. They are shorthand for implicit conversions. For instance, implicit class Foo(i: Int) is equivalent to class Foo(i: Int); implicit def Foo(i: Int) = new Foo(i). So it only works for classes that have exactly one parameter in their constructor. It would not make sense for most zero-parameter (type-)classes.
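As a small sketch of that rewriting (RichInt, RichInt2 and squared are made-up names, not from your code):
object ImplicitClassSketch {
  import scala.language.implicitConversions

  // An implicit class with exactly one constructor parameter...
  implicit class RichInt(val i: Int) {
    def squared: Int = i * i
  }

  // ...is essentially shorthand for a regular class plus an implicit conversion:
  class RichInt2(val i: Int) {
    def squared: Int = i * i
  }
  implicit def toRichInt2(i: Int): RichInt2 = new RichInt2(i)

  // Either form makes 3.squared compile. A zero-parameter (type-)class has nothing
  // to convert from, which is why `implicit class` is rejected for it; an implicit
  // val/def instance (as shown below) is what you want instead.
}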
The title of your question also seems to suggest that you think the compilation error is talking about type parameters of the type constructor, but I hope the above paragraph also makes clear that it is actually talking about value parameters of the value constructor.
For what (I think) you are trying to do, you will have to provide an implicit instance of CollectC yourself. I suggest putting it in the companion object of Collect. But you can choose an alternative solution if that fits your needs better.
scala> :paste
// Entering paste mode (ctrl-D to finish)
trait Generic[G[_]] {
// Some function declarations go here
}
abstract class Collect[F[_],B,A] {
def collect_ : A => F[B]
}
object Collect {
implicit def mkCollectC[F[_],B]: CollectC[F,B] = new CollectC[F,B]
}
class CollectC[F[_],B] extends Generic[({type C[A] = Collect[F,B,A]})#C] {
// Function definitions go here
}
// Exiting paste mode, now interpreting.
warning: there were four feature warnings; for details, enable `:setting -feature' or `:replay -feature'
defined trait Generic
defined class Collect
defined object Collect
defined class CollectC
scala> implicitly[Generic[({type C[X] = Collect[List,Int,X]})#C]]
res0: Generic[[X]Collect[[+A]List[A],Int,X]] = CollectC@12e8fb82
In Scala, the shit can hit the fan if the caller of a generic method fails to explicitly specify the type parameter. For example:
class Expression[+T] // Will have eval():T method, so +T
class NothingTest {
def makey[T](): Expression[T] = null
def needsBool(b: Expression[Boolean]): Unit = {}
var b: Expression[Boolean] = null
var n = makey() // : Expression[Nothing]
b=n // Yikes.
needsBool(n) // :-/ Supplied Expression[Nothing] ... not an Expression[Boolean]
}
I'm supposed to supply a type parameter to makey() (e.g. makey[Boolean]()); however, in this instance I forgot, and the program still compiled (which, by the way, is extremely easy to do).
The program will eventually fail in needsBool (implementation omitted), which did not receive an Expression[Boolean] object - it got an Expression[Nothing] object instead. Scala's docs say Nothing is a subtype of all types, which seems exceptionally rude and is certain to break type safety wherever it appears.
So, to reintroduce some type-safety, can I either:
prevent makey from returning Expression[Nothing] but requiring that a type parameter be provided? (I suspect not), OR
prevent needsBool from receiving an Expression[Nothing]?
at compile-time.
Update:
A fuller (compiling, but runtime failing example):
class Expression[+T](val value:T){
def eval:T = value
}
class NothingTest {
def makey[T](): Expression[T] = new Expression[String]("blah").asInstanceOf[Expression[T]]
def needsBool(b: Expression[Boolean]): Unit = {
val boolval = b.eval // Explode! String is not a Boolean
println(boolval)
}
var b: Expression[Boolean] = null
var n = makey() // : Expression[Nothing]. You're supposed to supply a type, but forgot.
b=n // Yikes.
needsBool(n) // :-/ Supplied Expression[Nothing]
}
I've found a somewhat hacky solution, but it works.
Create a NotNothing type that's contravariant in its type parameter, then provide an implicit object for both Any and Nothing.
Now if you try to resolve a NotNothing instance for Nothing, the compiler will complain about ambiguity. Case in point:
sealed trait NotNothing[-T]
object NotNothing {
implicit object YoureSupposedToSupplyAType extends NotNothing[Nothing]
implicit object notNothing extends NotNothing[Any]
}
Then constrain your makey function with the NotNothing type:
def makey[T : NotNothing]() = { ... }
And voilà, now you'll get a compile-time error if you forget to supply a type!
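A quick usage sketch of the trick (Expression is stripped down here; the behavior described is for the Scala 2 versions this answer targets):
class Expression[+T]

object NotNothingDemo {
  // The two NotNothing instances live in its companion object, so no import is needed.
  def makey[T: NotNothing](): Expression[T] = null

  val ok = makey[Boolean]()   // compiles: NotNothing[Any] satisfies NotNothing[Boolean]
  // val bad = makey()        // rejected: for T = Nothing both instances apply,
  //                          // so the implicit search is ambiguous
}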
In the following example, is there a way to prevent implicit resolution from picking defaultInstance, and have it use intInstance instead? More background after the code:
// the following part is an external fixed API
trait TypeCls[A] {
def foo: String
}
object TypeCls {
def foo[A](implicit x: TypeCls[A]) = x.foo
implicit def defaultInstance[A]: TypeCls[A] = new TypeCls[A] {
def foo = "default"
}
implicit val intInstance: TypeCls[Int] = new TypeCls[Int] {
def foo = "integer"
}
}
trait FooM {
type A
def foo: String = implicitly[TypeCls[A]].foo
}
// end of external fixed API
class FooP[A:TypeCls] { // with type params, we can use context bound
def foo: String = implicitly[TypeCls[A]].foo
}
class MyFooP extends FooP[Int]
class MyFooM extends FooM { type A = Int }
object Main extends App {
println(s"With type parameter: ${(new MyFooP).foo}")
println(s"With type member: ${(new MyFooM).foo}")
}
Actual output:
With type parameter: integer
With type member: default
Desired output:
With type parameter: integer
With type member: integer
I am working with a third-party library that uses the above scheme to provide "default" instances for the type class TypeCls. I think the above code is a minimal example that demonstrates my problem.
Users are supposed to mix in the FooM trait and instantiate the abstract type member A. The problem is that due to the defaultInstance the call of (new MyFooM).foo does not resolve the specialized intInstance and instead commits to defaultInstance which is not what I want.
I added an alternative version using type parameters, called FooP (P = Parameter, M = Member), which avoids resolving defaultInstance by using a context bound on the type parameter.
Is there an equivalent way to do this with type members?
EDIT: I have an error in my simplification: foo is actually not a def but a val, so it is not possible to add an implicit parameter. So none of the current answers are applicable.
trait FooM {
type A
val foo: String = implicitly[TypeCls[A]].foo
}
// end of external fixed API
class FooP[A:TypeCls] { // with type params, we can use context bound
val foo: String = implicitly[TypeCls[A]].foo
}
The simplest solution in this specific case is to have foo itself require an implicit instance of TypeCls[A].
The only downside is that it will be passed on every call to foo as opposed to just when instantiating
FooM. So you'll have to make sure they are in scope on every call to foo. Though as long as the TypeCls instances are in the companion object, you won't have anything special to do.
trait FooM {
type A
def foo(implicit e: TypeCls[A]): String = e.foo
}
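For illustration, with that change resolution happens at each call site, where A is known, so the specialized instance is found (a minimal sketch reusing the question's names):
class MyFooM extends FooM { type A = Int }

object FooMDemo extends App {
  // At this call site A is known to be Int, so TypeCls.intInstance is selected
  // (it is more specific than the generic defaultInstance).
  println((new MyFooM).foo)   // prints "integer"
}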
UPDATE: In my above answer I managed to miss the fact that FooM cannot be modified. In addition the latest edit to the question mentions that FooM.foo is actually a val and not a def.
Well, the bad news is that the API you're using is simply broken. There is no way FooM.foo will ever return anything useful (it will always resolve TypeCls[A] to TypeCls.defaultInstance regardless of the actual value of A). The only way out is to override foo in a derived class where the actual value of A is known, in order to be able to use the proper instance of TypeCls. Fortunately, this idea can be combined with your original workaround of using a class with a context bound (FooP in your case):
class FooMEx[T:TypeCls] extends FooM {
type A = T
override val foo: String = implicitly[TypeCls[A]].foo
}
Now instead of having your classes extend FooM directly, have them extend FooMEx:
class MyFoo extends FooMEx[Int]
The only difference between FooMEx and your original FooP class is that FooMEx does extend FooM, so MyFoo is a proper instance of FooM and can thus be used with the fixed API.
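A quick check of how that behaves (FooMExDemo is an illustrative name):
object FooMExDemo extends App {
  class MyFoo extends FooMEx[Int]

  // MyFoo is still a FooM, so it can be handed to the fixed API...
  val asFooM: FooM = new MyFoo
  // ...and foo was initialized through the context bound on FooMEx,
  // which resolved TypeCls[Int] to intInstance.
  println(asFooM.foo)   // prints "integer"
}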
Can you copy the code from the third-party library? Overriding the method does the trick.
class MyFooM extends FooM {
  type A = Int
  override def foo: String = implicitly[TypeCls[A]].foo
}
It is a hack, but I doubt there is anything better.
I do not know why this works the way it does. It must be something about the order in which type aliases are substituted in the implicitly expression.
Only an expert in the language specification can tell you the exact reason.
I have something like this in scala:
abstract class Point[Type](n: String){
val name = n
var value: Type = _
}
So far so good. The problem comes in a class that extends Point.
case class Input[Type](n:String) extends Point(n){
def setValue(va: Type) = value = va
}
On the setValue line I have this problem:
[error] type mismatch;
[error] found : va.type (with underlying type Type)
[error] required: Nothing
[error] def setValue(va: Type) = value = va
I have tried to initialize with null and null.asInstanceOf[Type] but the result is the same.
How can I initialize value so it can be used in setValue?
You should specify that Input extends Point with the generic type Type, because as long as it is not specified, it is inferred to be Nothing (I guess the compiler can't infer it from the setValue method). So you have to do the following:
case class Input[Type](n:String) extends Point[Type](n){
def setValue(va: Type) = value = va
}
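With that change, Type flows through to Point and value gets the expected type; a small usage sketch (the values are illustrative):
object InputDemo extends App {
  val in = Input[Int]("answer")
  in.setValue(42)                          // value now has type Int rather than Nothing
  println(in.name + " = " + in.value)      // prints "answer = 42"
}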
More information
I answered this question for the compilation error (it does compile on Scala 2.9.0.1). Moreover, I saw this case class as the implementation for an existing type, like Int. The usage of _ in the abstract class is of course a bad idea, but it is not prohibited, and _ is not always null: it is the default value. For example, var x: Int = _ will assign the value 0 to x.
Try the following:
package inputabstraction
abstract class Point[T](n:String){
def value: T
val name = n
}
case class Input[T](n:String, value:T) extends Point[T](n)
object testTypedCaseClass{
def test(){
val foo = Input("foo", "bar")
println(foo)
}
}
A simple Application to check that it works:
import inputabstraction._
object TestApp extends Application{
testTypedCaseClass.test()
}
Explanation
The first mistake you are making is case class Input[Type](n:String) extends Point(n){. Point is a typed class, and so when you are calling the superclass constructor with extends Point(n) you need to specify the type of Point. This is done like this: extends Point[T](n), where T is the Type you are planning to use.
The second error is that you are both defining and declaring value: T here: var value: Type = _. In this statement, _ is a value, and its value is Nothing. The Scala compiler infers from this that Point[T] is Point[Nothing]. Thus when you attempt to set it in the body of your setValue method, you must set it to Nothing, which is probably not what you want. If you attempt to set it to anything besides Nothing, you will get the type mismatch from above, because value is typed as Nothing due to your use of _.
The third mistake is using var instead of val or def. val and def can be overridden interchangeably, which means that subtypes can override with either val or def, and the Scala compiler will figure it out for you. It is best practice to define vals as functions using def in abstract classes and traits, because the initialization order of subtype constructors is a very difficult thing to get right (there is an algorithm for how the compiler decides how to construct a class from its supertypes). TL;DR: use def in supertypes. Case class parameters automatically generate val fields, which, since you are extending Point, will create a val value field that overrides the def value field in Point[T].
You can get away with all this Type/T abstraction in Scala because of type inference and the fact that Point is abstract, which makes value overridable via a val.
The preferred way of doing dependency injection like this is the cake pattern, but this example I have provided works for your use-case.
Suppose we have implicit parameter lookup concerning only local scopes:
trait CanFoo[A] {
def foos(x: A): String
}
object Def {
implicit object ImportIntFoo extends CanFoo[Int] {
def foos(x: Int) = "ImportIntFoo:" + x.toString
}
}
object Main {
def test(): String = {
implicit object LocalIntFoo extends CanFoo[Int] {
def foos(x: Int) = "LocalIntFoo:" + x.toString
}
import Def._
foo(1)
}
def foo[A:CanFoo](x: A): String = implicitly[CanFoo[A]].foos(x)
}
In the above code, LocalIntFoo wins over ImportIntFoo.
Could someone explain how it's considered more specific using "the rules of static overloading resolution (§6.26.3)"?
Edit:
The name binding precedence is a compelling argument, but there are several issues unresolved.
First, Scala Language Reference says:
If there are several eligible arguments which match the implicit parameter’s type, a most specific one will be chosen using the rules of static overloading resolution (§6.26.3).
Second, name binding precedence is about resolving a known identifier x to a particular member pkg.A.B.x in case there are several variables/methods/objects named x in scope. ImportIntFoo and LocalIntFoo are not named the same.
Third, I can show that name binding precedence alone is not in play as follows:
trait CanFoo[A] {
def foos(x: A): String
}
object Def {
implicit object ImportIntFoo extends CanFoo[Int] {
def foos(x: Int) = "ImportIntFoo:" + x.toString
}
}
object Main {
def test(): String = {
implicit object LocalAnyFoo extends CanFoo[Any] {
def foos(x: Any) = "LocalAnyFoo:" + x.toString
}
// implicit object LocalIntFoo extends CanFoo[Int] {
// def foos(x: Int) = "LocalIntFoo:" + x.toString
// }
import Def._
foo(1)
}
def foo[A:CanFoo](x: A): String = implicitly[CanFoo[A]].foos(x)
}
println(Main.test)
Put this in test.scala and run scala test.scala, and it prints out ImportIntFoo:1.
This is because static overloading resolution (§6.26.3) says the more specific type wins.
If we pretended that all eligible implicit values were named the same, LocalAnyFoo should have masked ImportIntFoo.
Related:
Where does Scala look for implicits?
This is a great summary of implicit parameter resolution, but it quotes Josh's nescala presentation instead of the spec. His talk is what motivated me to look into this.
Compiler Implementation
rankImplicits
I wrote my own answer in the form of a blog post revisiting implicits without import tax.
Update: Furthermore, the comments from Martin Odersky in the above post revealed that Scala 2.9.1's behavior of LocalIntFoo winning over ImportIntFoo is in fact a bug. See implicit parameter precedence again.
Implicits are searched for in two places:
1) implicits visible to the current invocation scope via local declarations, imports, outer scope, inheritance, or package objects, which are accessible without a prefix.
2) the implicit scope, which contains all sorts of companion objects and package objects that bear some relation to the type of the implicit we are searching for (i.e. the package object of the type, the companion object of the type itself, of its type constructor if any, of its type parameters if any, and also of its supertypes and supertraits).
If at either stage we find more than one implicit, the static overloading rule is used to resolve them.
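As a minimal sketch of the second stage (Show, intShow and render are made-up names for illustration): no import is needed when the instance lives in the companion object of the type being looked up.
trait Show[A] { def show(a: A): String }

object Show {
  // Part of the implicit scope of Show[Int], so it is found without any import.
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int): String = s"Int($a)"
  }
}

object ImplicitScopeDemo extends App {
  def render[A](a: A)(implicit s: Show[A]): String = s.show(a)
  println(render(3))   // resolves Show.intShow via stage 2 (the implicit scope)
}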
Update 2: When I asked Josh about Implicits without Import Tax, he explained to me that he was referring to name binding rules for implicits that are named exactly the same.
From http://www.scala-lang.org/docu/files/ScalaReference.pdf, Chapter 2:
Names in Scala identify types, values, methods, and classes which are
collectively called entities. Names are introduced by local definitions
and declarations (§4), inheritance (§5.1.3), import clauses (§4.7), or
package clauses (§9.2) which are collectively called bindings.
Bindings of different kinds have a precedence defined on them:
1. Definitions and declarations that are local, inherited, or made available by a package clause in the same compilation unit where the
definition occurs have highest precedence.
2. Explicit imports have next highest precedence.
3. Wildcard imports have next highest precedence.
4. Definitions made available by a package clause not in the compilation unit where the definition occurs have lowest precedence.
I may be mistaken, but the call to foo(1) is in the same compilation unit as LocalIntFoo, resulting in that implicit taking precedence over ImportIntFoo.
Could someone explain how it's considered more specific using "the
rules of static overloading resolution (§6.26.3)"?
There's no method overload, so 6.26.3 is utterly irrelevant here.
Overloading refers to multiple methods with the same name but different parameters being defined in the same class. For example, method f in example 6.26.1 is overloaded:
class A extends B {}
def f(x: B, y: B) = . . .
def f(x: A, y: B) = . . .
val a: A
val b: B
Implicit parameter resolution precedence is a completely different rule, and one which has a question and answer already on Stack Overflow.