Scala Loses Generic Type of Parameter

In over my head dealing with a tricky covariant type used in an inherited trait's overridden function. The basic question is, what is the [?] type? I can't find a good definition (it's kinda ungooglable), and so it's unclear why a [T] gets replaced with [?] in the following example:
> sealed trait Bar[+T]
> trait FooT { type Other; def foo[T,V](stuff:Bar[T]*) { stuff.headOption.isDefined } }
> trait TooF extends FooT { override def foo[T,V](stuff:Bar[T]*) { super.foo(stuff) } }
<console>:7: error: type mismatch;
found : Bar[T]*
required: Bar[?]
trait TooF extends FooT { override def foo[T,V](stuff:Bar[T]*) { super.foo(stuff) } }

I'm not sure of the exact reason it shows Bar[?], but I think it's probably that the type parameter hasn't been resolved yet (the compiler prints ? for a type variable it hasn't instantiated). The real problem is that your syntax for passing the varargs on to the super method is incorrect. It should be
override def foo[T,V](stuff:Bar[T]*) { super.foo(stuff:_*) }
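For reference, here is the full corrected snippet as a plain (non-REPL) sketch, with explicit : Unit = result types in place of the original procedure syntax; with stuff: _* the collected Seq is expanded back into varargs and the call to super compiles:
sealed trait Bar[+T]
trait FooT {
  type Other
  def foo[T, V](stuff: Bar[T]*): Unit = { stuff.headOption.isDefined }
}
trait TooF extends FooT {
  // stuff: _* tells the compiler to pass the already-collected Seq on as varargs
  override def foo[T, V](stuff: Bar[T]*): Unit = { super.foo(stuff: _*) }
}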

Context Bound on a Generic Class Using Implicits

I am learning Scala in order to use it for a project.
One thing I want to get a deeper understanding of is the type system, as it is something I have never used before in my other projects.
Suppose I have set up the following code:
// priority implicits
sealed trait Stringifier[T] {
  def stringify(lst: List[T]): String
}

trait Int_Stringifier {
  implicit object IntStringifier extends Stringifier[Int] {
    def stringify(lst: List[Int]): String = lst.toString()
  }
}

object Double_Stringifier extends Int_Stringifier {
  implicit object DoubleStringifier extends Stringifier[Double] {
    def stringify(lst: List[Double]): String = lst.toString()
  }
}

import Double_Stringifier._

object Example extends App {
  trait Animal[T0] {
    def incrementAge(): Animal[T0]
  }

  case class Food[T0: Stringifier]() {
    def getCalories = 100
  }

  case class Dog[T0: Stringifier]
      (age: Int = 0, food: Food[T0] = Food()) extends Animal[String] {
    def incrementAge(): Dog[T0] = this.copy(age = age + 1)
  }
}
So in the example, there is a type error:
ambiguous implicit values:
[error] both object DoubleStringifier in object Double_Stringifier of type Double_Stringifier.DoubleStringifier.type
[error] and value evidence$2 of type Stringifier[T0]
[error] match expected type Stringifier[T0]
[error] (age: Int = 0, food: Food[T0] = Food()) extends Animal[String]
Ok fair enough. But if I remove the context bound, this code compiles. I.e. if I change the code for Dog to:
case class Dog[T0]
    (age: Int = 0, food: Food[T0] = Food()) extends Animal[String] {
  def incrementAge(): Dog[T0] = this.copy(age = age + 1)
}
Now I assumed that this would also not compile, because this type is more generic, so more ambiguous, but it does.
What is going on here? I understand that when I put the context bound, the compiler doesn't know whether it is a double or an int. But why then would an even more generic type compile? Surely if there is no context bound, I could potentially have a Dog[String] etc, which should also confuse the compiler.
From this answer: "A context bound describes an implicit value, instead of view bound's implicit conversion. It is used to declare that for some type A, there is an implicit value of type B[A] available"
Now I assumed that this would also not compile, because this type is more generic, so more ambiguous, but it does.
The ambiguity was between implicits. Both
Double_Stringifier.DoubleStringifier
and the anonymous evidence of Dog[T0: Stringifier] (because class Dog[T0: Stringifier](...) is desugared to class Dog[T0](...)(implicit ev: Stringifier[T0])) were the candidates.
(Int_Stringifier#IntStringifier was irrelevant because it has lower priority.)
Now that you've removed the context bound, only one candidate for the implicit parameter of Food() remains, so there's no ambiguity. I can't see how the type being more generic is relevant. More generic doesn't mean more ambiguous: either you have ambiguity between implicits or you don't.
Actually, if you remove the import but keep the context bound, the anonymous evidence is not seen in default values. So it counts for ambiguity but doesn't count when it is alone :)
Scala 2.13.2, 2.13.3.
It seems to me (and if I'm wrong I'm hoping #DmytroMitin will correct me) that the key to understanding this is the default value supplied for the food parameter, which makes class Dog both a definition site, requiring an implicit to be available at its call sites, and itself a call site (of Food()), requiring that an implicit be in scope at compile time.
The import earlier in the code supplies the implicit required for the Food() call site, but the Dog constructor also requires an implicit, placed in ev, from its call site. Thus the ambiguity.
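To see the desugaring concretely, here is a tiny sketch of what a context bound expands to (Ordering stands in for Stringifier here; this is not code from the question):
// context bound form
class Box1[A: Ordering](val value: A)
// what it desugars to: an extra implicit evidence parameter
class Box2[A](val value: A)(implicit ev: Ordering[A])
The two forms are equivalent. The point is that Dog[T0: Stringifier] introduces an extra implicit Stringifier[T0] parameter (the ev above), and inside the default argument Food() that evidence competes with the imported DoubleStringifier, which is exactly the ambiguity the error reports.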

Strange Scala compiler error when removing a call to a function that has Unit return type, how is this even possible?

Here is a strange situation:
If I comment out the call to feed_usingExplicitTypeClassInstance below, then I get a compiler error.
Very puzzling. Any explanation?
I mean, I comment out a function call (which returns no value) and then the code does not compile anymore?
Should this even be possible at all in theory? In any programming language?
I mean, I comment out something like println("hello") and then the code does not compile anymore?
Of course it would be understandable if I commented out a declaration or something, but a call to a function that does not return anything?
object AnimalFeeder extends App {
  def feed_usingExplicitTypeClassInstance[AnimalInstance]
      (animalTypeClass: AnimalTypeClass[AnimalInstance])
      (food: animalTypeClass.FoodThatAnimalLikes) = {
    animalTypeClass.feed(food)
  }

  def feed_usingImplicitTypeClassInstance[AnimalInstance, Food]
      (food: Food)
      (implicit animalTypeClass: AnimalTypeClass.Aux[Food, AnimalInstance]) = {
    animalTypeClass.feed(food)
  }

  // If I comment out this line, THEN !, I get an error !!!! How ???
  feed_usingExplicitTypeClassInstance(AnimalTypeClass.CatInstance)(new CatFood())

  feed_usingImplicitTypeClassInstance(new CatFood)
}

trait Food {
  def eat(): Unit
}

trait AnimalTypeClass[AnimalInstance] {
  type FoodThatAnimalLikes <: Food
  def feed(f: FoodThatAnimalLikes) = f.eat()
}

object AnimalTypeClass {
  type Aux[Food, Animal] = AnimalTypeClass[Animal] {
    type FoodThatAnimalLikes = Food
  }

  implicit object CatInstance extends AnimalTypeClass[Cat] {
    override type FoodThatAnimalLikes = CatFood
  }
}

trait Cat

class CatFood extends Food {
  override def eat(): Unit = println("meow")
}
This is the error:
Error:(23, 38) could not find implicit value for parameter animalTypeClass: AnimalTypeClass.Aux[CatFood,AnimalInstance]
feed_usingImplicitTypeClassInstance(new CatFood)
Error:(23, 38) not enough arguments for method feed_usingImplicitTypeClassInstance: (implicit animalTypeClass: AnimalTypeClass.Aux[CatFood,AnimalInstance])Unit.
Unspecified value parameter animalTypeClass.
feed_usingImplicitTypeClassInstance(new CatFood)
EDIT:
If I insert the line:
AnimalTypeClass.CatInstance
before:
feed_usingImplicitTypeClassInstance(new CatFood)
then the code compiles again, even if the line
feed_usingExplicitTypeClassInstance(AnimalTypeClass.CatInstance)(new CatFood())
is commented out.
This is a pretty well known issue, where implicits which appear after their usage in the same file and without an explicit type annotation are not found. For that reason it is strongly advised (and this will eventually be enforced) to give all non-local implicits an explicit type annotation. Unfortunately, implicit objects are a bit tricky here, because they always act like implicit definitions without a type annotation, and it is impossible to give them an explicit type. However, last I checked, this seemed to be fixed in Dotty for implicit objects.
See also, among others, https://github.com/scala/bug/issues/8697
The reason that it does work when you uncomment a call to AnimalTypeClass.CatInstance in your code is that that reference will force the implicit object to be type checked earlier, so its type will be known before its implicit usage.
You have the definition of the implicit value in the same file after the usage of this value. It is not initialized when the compiler looks for an implicit value when you call feed_usingImplicitTypeClassInstance. Calling feed_usingExplicitTypeClassInstance with an explicit reference to this implicit value forces the implicit to initialize, and the compiler can use it in the implicit call.
Possible solutions:
Move the definition of the implicit value to another file.
If the implicit value is in the same file, move its definition above the place where you use it implicitly.
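If restructuring the file isn't an option, another workaround in the spirit of the "explicit type annotation" advice above (a sketch, not part of the original answers) is to replace the implicit object with an annotated implicit val; the explicit call would then reference AnimalTypeClass.catInstance instead of AnimalTypeClass.CatInstance:
object AnimalTypeClass {
  type Aux[Food, Animal] = AnimalTypeClass[Animal] {
    type FoodThatAnimalLikes = Food
  }

  // An implicit val with an explicit type annotation can be resolved even when it is
  // defined later in the same file, which is exactly where implicit objects fall short.
  implicit val catInstance: Aux[CatFood, Cat] = new AnimalTypeClass[Cat] {
    override type FoodThatAnimalLikes = CatFood
  }
}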

Scala: buggy type inference?

The following is a simplified version of my original code to make things simpler (sorry it's still a bit complicated):
trait BuilderBase

trait MessageBase {
  type Builder <: BuilderBase
}

class SomeMessage extends MessageBase {
  type Builder = SomeMessage.Builder
}

object SomeMessage {
  class Builder extends BuilderBase
}

class Covariant[+T]
class NonCovariant[T]

def func[T <: MessageBase](value: Covariant[T]): Covariant[T#Builder] = null

val message: Covariant[SomeMessage] = null
val result: Covariant[SomeMessage.Builder] = func(message)
And the last line fails to compile, with an error reported at the message argument of func(message):
type mismatch; found : Covariant[SomeMessage] required: Covariant[SomeMessage.type]
Clearly func takes a Covariant of a T that is a subclass of MessageBase, so what's required there is Covariant[SomeMessage], not Covariant[SomeMessage.type]; SomeMessage.type (the type of the companion object SomeMessage) does not even conform to MessageBase.
Strangely, the error goes away without the type annotation, say, val result = func(message), and the type of result is exactly the same as what's meant: Covariant[SomeMessage.Builder]. So it just fails with the correct type annotation. Is this a bug?
One more clue is that this doesn't happen when all Covariant is replaced with NonCovariant, so it might somehow be related to covariance. Any suggestion or help will be appreciated.
I know that some little tweaks can work around this specific problem, e.g. simply omitting the type annotation might be one of them. But it would be really helpful if I could get more clues on what's really going on in the compiler, for example by giving it some command-line options.
I don't know the reason either, but to give a slight alternative to #AssafMendelson's answer, the following works too:
trait BuilderBase

trait MessageBase {
  type Builder <: BuilderBase
}

class SomeMessage extends MessageBase {
  type Builder = BuilderBase {}
}

class Covariant[+T]
class NonCovariant[T]

def func[T <: MessageBase](value: Covariant[T]): Covariant[T#Builder] = null

val message: Covariant[SomeMessage] = null
val result: Covariant[SomeMessage#Builder] = func(message)
So basically, instead of defining the Builder class in the companion object, simply define the type in the class itself.
I believe the issue is that the compiler gets confused by the type member Builder and the class Builder.
Basically, SomeMessage.Builder can refer both to the inherited type member and to the class in the companion object; however, it first tries the instance and only then the object.
I tried a simple change: I renamed the class Builder to Builder2 (and the relevant references to it) and it seems to work.
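For reference, here is roughly what that rename looks like in full (a sketch of the change described above; Builder2 is just an arbitrary new name, and the def/vals are wrapped in an object so the snippet compiles as a file):
trait BuilderBase

trait MessageBase {
  type Builder <: BuilderBase
}

class SomeMessage extends MessageBase {
  // the type member now points at a class whose name no longer clashes with it
  type Builder = SomeMessage.Builder2
}

object SomeMessage {
  class Builder2 extends BuilderBase
}

class Covariant[+T]

object ReproAfterRename {
  def func[T <: MessageBase](value: Covariant[T]): Covariant[T#Builder] = null

  val message: Covariant[SomeMessage] = null
  val result: Covariant[SomeMessage.Builder2] = func(message)
}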

Collection structural type parameter weirdness

This seems like a simple thing but I can't understand it...
This compiles:
object CanFoo1 {
  def foo(): Unit = {
    println("Yup, I can foo alright")
  }
}

object CanFoo2 {
  def foo(): Unit = {
    println("And I can foo with the best")
  }
}

trait A {
  type CanFoo = { def foo(): Unit }
  def fooers: Seq[CanFoo]
}

class B extends A {
  def fooers = Seq(
    // CanFoo1, // <- won't compile when this is uncommented
    CanFoo2
  )
}
But uncommenting the // CanFoo1, line gives:
error: type mismatch;
found : Seq[Object]
required: Seq[B.this.CanFoo]
(which expands to) Seq[AnyRef{def foo(): Unit}]
def fooers = Seq(
^
one error found
So it seems like the compiler understands that a collection containing just one element Seq(CanFoo2) (or Seq(CanFoo1)) is of the correct type, but when both objects are in the collection it gives up? What am I doing wrong here?
So it seems like the compiler understands that a collection containing just one element Seq(CanFoo2) (or Seq(CanFoo1)) is of the correct type, but when both objects are in the collection it gives up? What am I doing wrong here?
When you pass only CanFoo1 or only CanFoo2 to the Seq apply, the sequence is inferred to be of type Seq[CanFoo1.type] or Seq[CanFoo2.type] respectively; the element type is not inferred to be CanFoo.
When you pass in both elements, the compiler looks for a common type to which both elements can be widened so the code compiles, and the only type it finds is Object; but fooers is declared to return Seq[CanFoo], so the compiler yells.
You can help the compiler a little by explicitly writing the type of the collection:
class B extends A {
  def fooers = Seq[CanFoo](
    CanFoo1,
    CanFoo2
  )
}
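A slight variation (a sketch, not from the answer above) is to ascribe the elements rather than the collection; the effect is the same, since the singleton types are upcast to CanFoo before the element type is inferred:
class B extends A {
  def fooers = Seq(
    CanFoo1: CanFoo, // ascription widens CanFoo1.type to the structural type
    CanFoo2: CanFoo
  )
}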

Scala type mismatch: required _$1 where type _$1 <:

I'm a newbie to Scala and I'm facing an issue I can't understand or solve. I have written a generic trait, which is this one:
trait DistanceMeasure[P <: DbScanPoint] {
  def distance(p1: P, p2: P): Double
}
where DbScanPoint is simply:
trait DbScanPoint extends Serializable {}
Then I have the following two classes extending them:
class Point2d(id: Int, x: Double, y: Double) extends DbScanPoint {
  def getId() = id
  def getX() = x
  def getY() = y
}

class EuclideanDistance extends DistanceMeasure[Point2d] with Serializable {
  override def distance(p1: Point2d, p2: Point2d) =
    (p1.getX() - p2.getX()) * (p1.getX() - p2.getX()) + (p1.getY() - p2.getY()) * (p1.getY() - p2.getY())
}
And at the end I have this class:
class DBScanSettings {
  var distanceMeasure: DistanceMeasure[_ <: DbScanPoint] = new EuclideanDistance
  //...
}
My problem is that when I write this in my test main:
val dbScanSettings = new DBScanSettings()
dbScanSettings.distanceMeasure.distance(new Point2d(1,1,1), new Point2d(2,2,2))
I get the following compiling error:
type mismatch;
[error] found : it.polito.dbdmg.ontic.point.Point2d
[error] required: _$1 where type _$1 <: it.polito.dbdmg.ontic.point.DbScanPoint
I can't understand what the problem is. I have done a very similar thing with other classes and got no error, so the reason for this error is quite obscure to me.
May somebody help me?
Thanks.
UPDATE
I managed to do what I needed by changing the code to:
trait DistanceMeasure {
  def distance(p1: DbScanPoint, p2: DbScanPoint): Double
}
And obviously making all the related changes.
The heart of your problem is that you are defining your distanceMeasure var with an existential type, so to the compiler the element type is not completely known. Then you are calling distance, which takes two instances of type P <: DbScanPoint, passing in two Point2d instances. Now, these are the correct types for the concrete class behind distanceMeasure (a new EuclideanDistance), but the way you defined distanceMeasure (with an existential), the compiler cannot verify that Point2d is the right type for the underlying DistanceMeasure.
Say, for argument's sake, that instead of a new EuclideanDistance you instantiated a completely different implementation of DistanceMeasure that did not take Point2d instances, and then tried to call distance the way you have it here. If the compiler can't prove that the underlying instance accepts the arguments supplied, it's going to complain like this.
There are a bunch of ways to fix this, and the solution ultimately depends on the flexibility you need in your class structure. One possible way is like so:
trait DBScanSettings[P <: DbScanPoint] {
  val distanceMeasure: DistanceMeasure[P]
  //...
}

class Point2dScanSettings extends DBScanSettings[Point2d] {
  val distanceMeasure = new EuclideanDistance
}
And then to test:
val dbScanSettings = new Point2dScanSettings()
dbScanSettings.distanceMeasure.distance(new Point2d(1,1,1), new Point2d(2,2,2))
But without really understanding your requirements for what levels of abstraction you need, it's going to be up to you to decide how to restructure.
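Another possible shape, along the same lines (a sketch, not from the original answer): keep a single non-generic DBScanSettings trait but capture the point type as an abstract type member, which still lets the compiler check the arguments to distance:
trait DBScanSettings {
  type P <: DbScanPoint
  def distanceMeasure: DistanceMeasure[P]
}

object Point2dScanSettings extends DBScanSettings {
  type P = Point2d
  val distanceMeasure = new EuclideanDistance
}

object SettingsTest extends App {
  // distanceMeasure is statically a DistanceMeasure[Point2d] here, so this type-checks
  Point2dScanSettings.distanceMeasure.distance(new Point2d(1, 1, 1), new Point2d(2, 2, 2))
}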