I have a confusing problem in my project and can't quite solve it, so please help me!
Here is a sample code that simplifies my original one:
trait Sample[A] {
  def doit(param: A)
}

case object SampleEx1 extends Sample[Int] {
  def doit(param: Int) = {
    param + 0
  }
}
Now I need to make A covariant for external reasons, but that results in the error shown in the comment:
trait Sample[+A] {
  def doit(param: A) // ERR: covariant type A occurs in contravariant position in type A of value param
}

case object SampleEx1 extends Sample[Int] {
  def doit(param: Int) = {
    param + 0
  }
}
So I searched Stack Overflow and found a solution using another type B, but then another error happens:
trait Sample[+A] {
  def doit[B >: A](param: B)
}

case object SampleEx1 extends Sample[Int] {
  def doit[Int](param: Int) = {
    param + 0 // type mismatch; found : Int(0) required: String
  }
}
Apparently param is no longer Int because of [B >: Int].
I tried solving this myself and with Google but couldn't figure it out. Could anyone help? Thank you so much! :))
The first error, covariant type A occurs in contravariant position in type A of value param, means that if a generic type Foo declares itself covariant over T (i.e. Foo[+T]), its methods may only return T, never require it; otherwise type consistency would be violated. For instance, you could pass an instance of Sample[Dog] where a Sample[Animal] is required, and then something could call doit(new Duck) on it, even though Sample[Dog]#doit can only handle instances of Dog. Return values behave the exact opposite way in this context (I'll let you figure out why).
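To make that Dog/Duck scenario concrete, here is a small sketch (Animal, Dog and Duck are invented for illustration; the unsound lines are left as comments because the compiler rejects exactly this):

class Animal
class Dog extends Animal { def bark(): String = "woof" }
class Duck extends Animal

// If Sample[+A] were allowed to declare `def doit(param: A)`, this would type-check:
//
//   object DogSample extends Sample[Dog] {
//     def doit(param: Dog) = param.bark()     // only knows how to handle Dogs
//   }
//   val general: Sample[Animal] = DogSample   // allowed, since Sample is covariant
//   general.doit(new Duck)                    // a Duck handed to code that expects a Dog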
However this
def doit[Int](param: Int)
means that doit has a type parameter called Int, which has nothing to do with the actual Int type (although it certainly looks like it does at first glance, which is why you should never use type parameter names that coincide with the names of other/built-in types). So the error you're getting arises because Int in that context means "any type", and using + on an arbitrary type falls back to string concatenation rather than arithmetic addition.
Instead, to correctly inherit from Sample[+A], you need:
def doit[B >: Int](param: B)
However, that will still not allow you to do addition on param, because param can now be of any supertype of Int, not Int itself or a subtype thereof.
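This is the same trade-off the standard library makes: List[+A] is covariant, so its prepend method is declared with a lower bound, roughly def ::[B >: A](elem: B): List[B]; you may prepend anything, but the result type widens and the method body learns nothing Int-specific about the element. A small illustration (Animal, Dog and Duck are again invented):

class Animal
class Dog extends Animal
class Duck extends Animal

val dogs: List[Dog] = List(new Dog)
val animals = new Duck :: dogs   // B is inferred as Animal, so the result is a List[Animal]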
So I do not see how you can "fix" this: the way variance works simply doesn't allow a generic type to be covariant over its method parameter types. This has nothing to do with Scala really. But see e.g. http://blogs.atlassian.com/2013/01/covariance-and-contravariance-in-scala/ or http://docs.scala-lang.org/tutorials/tour/variances.html for more information on how variance works and why it has to work exactly the way it does (in any language that implements correct variance rules).
I think a better way in general to get a really helpful answer on Stack Overflow is to also describe what you really need to achieve, not just the implementation you've been working on so far.
Related
def toJson[T](obj: T) = {
  gson.toJson(obj)
}

def toJson[T](list: Seq[T]) = {
  toJson(seqAsJavaList(list))
}
This doesn't compile. And that's documented as a feature (see this answer):
When a method is overloaded and one of the methods calls another. The calling method needs a return type annotation.
The question is: why?
From the above link, plus some additional thoughts from colleagues, here are the possible reasons:
Scala uses the return type as well to resolve overloaded methods. Is this the case, and why is it needed? (Java doesn't use return types, for example.)
partially applied functions - if one of the methods doesn't take arguments and the other one does, toJson() may be viewed as a partially applied function, so it's not certain whether the return type is String or a function
I know it's a best practice to specify the return type anyway, but why exactly does the above snippet not compile, and if return type inference isn't good enough, why is it there in the first place?
This might not be the main reason, but note that another case where an explicit return type is required is:
When a method is recursive.
The problem is that, depending on the return type of the "calling" function, the call can be either to its namesake or to itself (i.e. recursive).
Let's say:
trait C
class A extends C
def a(obj: A) = {2}
Now consider:
def a[T <: C](obj: T): Int = {
  a(obj)
}
a(new A) //an Int, 2
versus:
def a[T <: C](obj: T): Any = {
  a(obj)
}
a(new A) //infinite recursion
Since inferring the return type of a recursive function is undecidable in general, inferring the return type of the "calling" function is also undecidable in general.
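For what it's worth, the usual workaround in practice is simply to annotate the return type on the overload that calls the other one. Here is a sketch based on the snippet from the question, assuming Gson and the standard seqAsJavaList conversion are the ones in scope:

import com.google.gson.Gson
import scala.collection.JavaConversions.seqAsJavaList

val gson = new Gson

def toJson[T](obj: T): String = gson.toJson(obj)

// Annotating the return type on the "calling" overload satisfies the rule
// quoted above, so this now compiles.
def toJson[T](list: Seq[T]): String = toJson(seqAsJavaList(list))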
I'm seeing something I do not understand. I have a hierarchy of (say) Vehicles, a corresponding hierarchy of VehicleReaders, and a VehicleReader object with apply methods:
abstract class VehicleReader[T <: Vehicle] {
  ...

object VehicleReader {
  def apply[T <: Vehicle](vehicleId: Int): VehicleReader[T] = apply(vehicleType(vehicleId))
  def apply[T <: Vehicle](vehicleType: VehicleType): VehicleReader[T] = vehicleType match {
    case VehicleType.Car => new CarReader().asInstanceOf[VehicleReader[T]]
    ...
Note that when you have more than one apply method, you must specify the return type. I have no issues when there is no need to specify the return type.
The cast (.asInstanceOf[VehicleReader[T]]) is the reason for the question - without it the result is compile errors like:
type mismatch;
found : CarReader
required: VehicleReader[T]
case VehicleType.Car => new CarReader()
^
Related questions:
Why cannot the compiler see a CarReader as a VehicleReader[T]?
What is the proper type parameter and return type to use in this situation?
I suspect the root cause here is that VehicleReader is invariant on its type parameter, but making it covariant does not change the result.
I feel like this should be rather simple (i.e., this is easy to accomplish in Java with wildcards).
The problem has a very simple cause and really doesn't have anything to do with variance. Consider an even simpler example:
object Example {
  def gimmeAListOf[T]: List[T] = List[Int](10)
}
This snippet captures the main idea of your code. But it is incorrect:
val list = Example.gimmeAListOf[String]
What will the type of list be? We asked the gimmeAListOf method specifically for a List[String]; however, it always returns List[Int](10). Clearly, this is an error.
So, to put it in words, when the method has a signature like method[T]: Example[T] it really declares: "for any type T you give me I will return an instance of Example[T]". Such types are sometimes called 'universally quantified', or simply 'universal'.
However, this is not your case: your function returns specific instances of VehicleReader[T] depending on the value of its parameter, e.g. CarReader (which, I presume, extends VehicleReader[Car]). Suppose I wrote something like:
class House extends Vehicle
val reader = VehicleReader[House](VehicleType.Car)
val house: House = reader.read() // Assuming there is a method VehicleReader[T].read(): T
The compiler will happily compile this, but I will get ClassCastException when this code is executed.
There are two possible fixes for this situation. First, you can use an existential (or existentially quantified) type, which can be thought of as a more powerful version of Java wildcards:
def apply(vehicleType: VehicleType): VehicleReader[_] = ...
The signature of this function basically reads "you give me a VehicleType and I return to you an instance of VehicleReader for some type". You will have an object of type VehicleReader[_]; you cannot say anything about the type of its parameter except that such a type exists, which is why these types are called existential.
def apply(vehicleType: VehicleType): VehicleReader[T] forSome {type T} = ...
This is an equivalent definition, and it probably makes clearer why these types have such properties: the type T is hidden inside the parameter, so you don't know anything about it except that it does exist.
But due to this property of existentials you cannot really obtain any information about the real type parameters. You cannot get, say, a VehicleReader[Car] out of a VehicleReader[_] except via a direct cast with asInstanceOf, which is dangerous, unless you store a TypeTag/ClassTag for the type parameter in VehicleReader and check it before the cast. This is sometimes (in fact, most of the time) unwieldy.
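To illustrate, here is a rough sketch of such a ClassTag-guarded cast; the Vehicle/Car/CarReader declarations below are stand-ins re-declared so the snippet is self-contained, not your actual classes, and asCarReader is a made-up helper:

import scala.reflect.ClassTag

trait Vehicle
class Car extends Vehicle

abstract class VehicleReader[T <: Vehicle](implicit val tag: ClassTag[T]) {
  def read(): T
}

class CarReader extends VehicleReader[Car] {
  def read(): Car = new Car
}

// Recover the precise type only when the stored tag proves the cast is safe.
def asCarReader(r: VehicleReader[_]): Option[VehicleReader[Car]] =
  if (r.tag.runtimeClass == classOf[Car]) Some(r.asInstanceOf[VehicleReader[Car]])
  else None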
That's where the second option comes to the rescue. There is a clear correspondence between VehicleType and VehicleReader[T] in your code, i.e. when you have a specific instance of VehicleType you definitely know the concrete T in the VehicleReader[T] signature:
VehicleType.Car -> CarReader (<: VehicleReader[Car])
VehicleType.Truck -> TruckReader (<: VehicleReader[Truck])
and so on.
Because of this it makes sense to add a type parameter to VehicleType. In this case your method will look like this:
def apply[T <: Vehicle](vehicleType: VehicleType[T]): VehicleReader[T] = ...
Now the input type and the output type are directly connected, and the user of this method is forced to provide a correct instance of VehicleType[T] for the T they want. This rules out the runtime error I mentioned earlier.
You will still need the asInstanceOf cast, though. To avoid casting completely you will have to move the VehicleReader instantiation code (e.g. your new CarReader()) into VehicleType, because the only place where you know the real value of VehicleType[T]'s type parameter is where instances of this type are constructed:
sealed trait VehicleType[T <: Vehicle] {
  def newReader: VehicleReader[T]
}

object VehicleType {
  case object Car extends VehicleType[Car] {
    def newReader = new CarReader
  }
  // ... and so on
}
The VehicleReader factory method will then look very clean and be completely typesafe:
object VehicleReader {
  def apply[T <: Vehicle](vehicleType: VehicleType[T]) = vehicleType.newReader
}
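A quick usage sketch of what that buys you (assuming Car <: Vehicle and CarReader <: VehicleReader[Car], as in the question):

val carReader = VehicleReader(VehicleType.Car)   // statically a VehicleReader[Car]
// Ascribing it to an unrelated reader type, e.g. VehicleReader[House],
// is now a compile error instead of a ClassCastException at runtime.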
This might not be the most correct terminology but what I mean by boxed type is Box[T] for type T. So Option[Int] is a boxed Int.
How might one go about extracting these types? My naive attempt:
//extractor
type X[Box[E]] = E //doesn't compile. E not found
//boxed
type boxed = Option[Int]
//unboxed
type parameter = X[boxed] //this is the syntax I would like to achieve
implicitly[parameter =:= Int] //this should compile
Is there any way to do this? Apart from the Apocalisp blog, I have a hard time finding instructions on type-level metaprogramming in Scala.
I can only imagine two situations. Either you use type parameters; then, if you use such a parameterized type, e.g. as an argument to a method, you will have its type parameter repeated in the method's type parameters:
trait Box[E]
def doSomething[X](b: Box[X]) { ... } // parameter re-stated as `X`
or you have type members, then you can refer to them per instance:
trait Box { type E }
def doSomething(b: Box) { type X = b.E }
...or generally
def doSomething(x: Box#E) { ... }
So I think you need to rewrite your question in terms of what you actually want to achieve.
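For what it's worth, the type-member variant can express something close to the check in your question; here is a small sketch, where IntBox is a made-up subclass standing in for Option[Int]:

trait Box { type E }
class IntBox extends Box { type E = Int }

object Demo {
  type parameter = IntBox#E       // the "unboxed" type, recovered via a type projection
  implicitly[parameter =:= Int]   // compiles
}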
trait Link[This] {
  var next: This = null
}
gives "type mismatch; found: Null(null) required: This"
So presumably I need to tell the type checker that This is going to be a type that can be assigned null. How do I do this?
(If there's a site I should be reading first before asking questions like this, please point me at it. I'm currently part-way through the preprint of the 2nd Ed of Programming In Scala)
You have to constrain This to be a supertype of Null, which is the way of telling the compiler that null is a valid value for that type. (In fact, thinking about Any, AnyRef and AnyVal only muddles the problem; just ask the compiler for what you want!)
trait Link[This >: Null] {
  var next: This = null
}
However, I would suggest that you avoid using null altogether; you could use Option[This] and assign None instead. Such a construction allows you to use pattern matching, and it is a very strong statement that clients using this field should expect it to possibly have no value.
trait Link[This] {
  var next: Option[This] = None
}
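For example, a client of the Option-based version might look like this (Node is a made-up subclass, just for illustration):

case class Node(value: Int) extends Link[Node]

val a = Node(1)
val b = Node(2)
a.next = Some(b)

a.next match {
  case Some(n) => println(s"next value: ${n.value}")
  case None    => println("end of the chain")
}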
trait Link {
  var next: This = null
}
This should work. Is there a specific reason that you need to parameterize the trait with a type?
My first thought, which didn't work (I'm not sure why):
trait Link[This <: AnyRef] { // Without the type bound, it's Any
  var next: This = null
}
When worse comes to worst, there's always casting:
trait Link[This <: AnyRef] {
  var next: This = null.asInstanceOf[This]
}
With the cast, you no longer need the type bound for this trait to compile, although you might want it there for other reasons.
Can anyone explain the compile error below? Interestingly, if I change the return type of the get() method to String, the code compiles just fine. Note that the thenReturn method has two overloads: a unary method and a varargs method that takes at least one argument. It seems to me that if the invocation is ambiguous here, then it would always be ambiguous.
More importantly, is there any way to resolve the ambiguity?
import org.scalatest.mock.MockitoSugar
import org.mockito.Mockito._

trait Thing {
  def get(): java.lang.Object
}

new MockitoSugar {
  val t = mock[Thing]
  when(t.get()).thenReturn("a")
}
error: ambiguous reference to overloaded definition,
both method thenReturn in trait OngoingStubbing of type
(java.lang.Object,java.lang.Object*)org.mockito.stubbing.OngoingStubbing[java.lang.Object]
and method thenReturn in trait OngoingStubbing of type
(java.lang.Object)org.mockito.stubbing.OngoingStubbing[java.lang.Object]
match argument types (java.lang.String)
when(t.get()).thenReturn("a")
Well, it is ambiguous. I suppose Java semantics allow for it, and it might merit a ticket asking for Java semantics to be applied in Scala.
The source of the ambiguity is this: a vararg parameter may receive any number of arguments, including 0. So, when you write thenReturn("a"), do you mean to call the thenReturn which receives a single argument, or do you mean to call the thenReturn that receives one object plus a vararg, passing 0 arguments to the vararg?
Now, when this kind of thing happens, Scala tries to figure out which method is "more specific". Anyone interested in the details should look that up in the Scala specification, but here is the explanation of what happens in this particular case:
object t {
  def f(x: AnyRef) = 1                // A
  def f(x: AnyRef, xs: AnyRef*) = 2   // B
}
if you call f("foo"), both A and B
are applicable. Which one is more
specific?
it is possible to call B with parameters of type (AnyRef), so A is
as specific as B.
it is possible to call A with parameters of type (AnyRef,
Seq[AnyRef]) thanks to tuple
conversion, Tuple2[AnyRef,
Seq[AnyRef]] conforms to AnyRef. So
B is as specific as A. Since both are
as specific as the other, the
reference to f is ambiguous.
As to the "tuple conversion" thing, it is one of the most obscure syntactic sugars of Scala. If you make a call f(a, b), where a and b have types A and B, and there is no f accepting (A, B) but there is an f which accepts (Tuple2(A, B)), then the parameters (a, b) will be converted into a tuple.
For example:
scala> def f(t: Tuple2[Int, Int]) = t._1 + t._2
f: (t: (Int, Int))Int
scala> f(1,2)
res0: Int = 3
Now, there is no tuple conversion going on when thenReturn("a") is called. That is not the problem. The problem is that, given that tuple conversion is possible, neither version of thenReturn is more specific, because any parameter passed to one could be passed to the other as well.
In the specific case of Mockito, it's possible to use the alternate API methods designed for use with void methods:
doReturn("a").when(t).get()
Clunky, but it'll have to do, as Martin et al don't seem likely to compromise Scala in order to support Java's varargs.
Well, I figured out how to resolve the ambiguity (seems kind of obvious in retrospect):
when(t.get()).thenReturn("a", Array[Object](): _*)
As Andreas noted, if the ambiguous method requires a null reference rather than an empty array, you can use something like
v.overloadedMethod(arg0, null.asInstanceOf[Array[Object]]: _*)
to resolve the ambiguity.
If you look at the standard library APIs you'll see this issue handled like this:
def meth(t1: Thing): OtherThing = { ... }
def meth(t1: Thing, t2: Thing, ts: Thing*): OtherThing = { ... }
By doing this, no call (with at least one Thing parameter) is ambiguous without extra fluff like Array[Thing](): _*.
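A minimal sketch of why that shape removes the ambiguity (Thing is a placeholder as above, and the return type is simplified to String so the snippet runs):

trait Thing
case object AThing extends Thing

def meth(t1: Thing): String = "one"
def meth(t1: Thing, t2: Thing, ts: Thing*): String = "two or more"

meth(AThing)                   // only the single-argument overload can take exactly one Thing
meth(AThing, AThing)           // only the varargs overload applies, with zero varargs
meth(AThing, AThing, AThing)   // the varargs parameter picks up the rest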
I had a similar problem using Oval (oval.sf.net) when trying to call its validate() method.
Oval defines two validate() methods:
public List<ConstraintViolation> validate(final Object validatedObject)
public List<ConstraintViolation> validate(final Object validatedObject, final String... profiles)
Trying this from Scala:
validator.validate(value)
produces the following compiler error:
both method validate in class Validator of type (x$1: Any,x$2: <repeated...>[java.lang.String])java.util.List[net.sf.oval.ConstraintViolation]
and method validate in class Validator of type (x$1: Any)java.util.List[net.sf.oval.ConstraintViolation]
match argument types (T)
var violations = validator.validate(entity);
Oval needs the varargs parameter to be null, not an empty array, so I finally got it to work with this:
validator.validate(value, null.asInstanceOf[Array[String]]: _*)