Spurious ambiguous reference error in Scala 2.7.7 compiler/interpreter? - scala

Can anyone explain the compile error below? Interestingly, if I change the return type of the get() method to String, the code compiles just fine. Note that the thenReturn method has two overloads: a unary method and a varargs method that takes at least one argument. It seems to me that if the invocation is ambiguous here, then it would always be ambiguous.
More importantly, is there any way to resolve the ambiguity?
import org.scalatest.mock.MockitoSugar
import org.mockito.Mockito._

trait Thing {
  def get(): java.lang.Object
}

new MockitoSugar {
  val t = mock[Thing]
  when(t.get()).thenReturn("a")
}
error: ambiguous reference to overloaded definition,
both method thenReturn in trait OngoingStubbing of type
(java.lang.Object,java.lang.Object*)org.mockito.stubbing.OngoingStubbing[java.lang.Object]
and method thenReturn in trait OngoingStubbing of type
(java.lang.Object)org.mockito.stubbing.OngoingStubbing[java.lang.Object]
match argument types (java.lang.String)
when(t.get()).thenReturn("a")

Well, it is ambiguous. I suppose Java semantics allow for it, and it might merit a ticket asking for Java semantics to be applied in Scala.
The source of the ambiguity is this: a vararg parameter may receive any number of arguments, including 0. So, when you write thenReturn("a"), do you mean to call the thenReturn which receives a single argument, or the thenReturn that receives one object plus a vararg, passing 0 arguments to the vararg?
Now, when this kind of thing happens, Scala tries to find which method is "more specific". Anyone interested in the details should look that up in Scala's specification, but here is the explanation of what happens in this particular case:
object t {
  def f(x: AnyRef) = 1              // A
  def f(x: AnyRef, xs: AnyRef*) = 2 // B
}
If you call f("foo"), both A and B are applicable. Which one is more specific?
It is possible to call B with parameters of type (AnyRef), so A is as specific as B.
It is possible to call A with parameters of type (AnyRef, Seq[AnyRef]): thanks to tuple conversion, Tuple2[AnyRef, Seq[AnyRef]] conforms to AnyRef, so B is as specific as A.
Since each is as specific as the other, the reference to f is ambiguous.
As to the "tuple conversion" thing, it is one of the most obscure bits of syntactic sugar in Scala. If you make a call f(a, b), where a and b have types A and B, and there is no f accepting (A, B) but there is an f which accepts a single Tuple2[A, B], then the parameters (a, b) will be converted into a tuple.
For example:
scala> def f(t: Tuple2[Int, Int]) = t._1 + t._2
f: (t: (Int, Int))Int
scala> f(1,2)
res0: Int = 3
Now, there is no tuple conversion going on when thenReturn("a") is called. That is not the problem. The problem is that, given that tuple conversion is possible, neither version of thenReturn is more specific, because any parameter passed to one could be passed to the other as well.

In the specific case of Mockito, it's possible to use the alternate API methods designed for use with void methods:
doReturn("a").when(t).get()
Clunky, but it'll have to do, as Martin et al don't seem likely to compromise Scala in order to support Java's varargs.

Well, I figured out how to resolve the ambiguity (seems kind of obvious in retrospect):
when(t.get()).thenReturn("a", Array[Object](): _*)
As Andreas noted, if the ambiguous method requires a null reference rather than an empty array, you can use something like
v.overloadedMethod(arg0, null.asInstanceOf[Array[Object]]: _*)
to resolve the ambiguity.

If you look at the standard library APIs you'll see this issue handled like this:
def meth(t1: Thing): OtherThing = { ... }
def meth(t1: Thing, t2: Thing, ts: Thing*): OtherThing = { ... }
By doing this, no call (with at least one Thing argument) is ambiguous, and no extra fluff like Array[Thing](): _* is needed.
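To make that concrete, here is a minimal sketch (Thing and meth are hypothetical stand-ins, not from any library) showing why the split into (T) and (T, T, T*) overloads is never ambiguous:

object OverloadSketch {
  // Hypothetical stand-ins for the Thing/OtherThing in the snippet above.
  case class Thing(n: Int)
  type OtherThing = String

  def meth(t1: Thing): OtherThing = "one"
  def meth(t1: Thing, t2: Thing, ts: Thing*): OtherThing = "two or more: " + (2 + ts.size)

  // Neither call is ambiguous: a single argument can only match the first
  // overload, and two or more arguments can only match the second.
  meth(Thing(1))
  meth(Thing(1), Thing(2), Thing(3))
}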

I had a similar problem using Oval (oval.sf.net) when trying to call its validate() method.
Oval defines two validate() methods:
public List<ConstraintViolation> validate(final Object validatedObject)
public List<ConstraintViolation> validate(final Object validatedObject, final String... profiles)
Trying this from Scala:
validator.validate(value)
produces the following compiler error:
both method validate in class Validator of type (x$1: Any,x$2: <repeated...>[java.lang.String])java.util.List[net.sf.oval.ConstraintViolation]
and method validate in class Validator of type (x$1: Any)java.util.List[net.sf.oval.ConstraintViolation]
match argument types (T)
var violations = validator.validate(entity);
Oval needs the varargs parameter to be null, not an empty array, so I finally got it to work with this:
validator.validate(value, null.asInstanceOf[Array[String]]: _*)

Method with ONLY implicit parameter
scala> def test1 (implicit i:Int )= Option(i)
test1: (implicit i: Int)Option[Int]
Trying to convert test1 into a function as shown below throws the following error. I must be missing something obvious here:
scala> def test2(implicit i:Int) = test1 _
<console>:8: error: type mismatch;
found : Option[Int]
required: ? => ?
def test2(implicit i:Int) = test1 _
When using an implicit parameter you need to have one in scope, like this:
object test {
  implicit var i = 10
  def fun(implicit i: Int) = println(i)
}
test.fun
And if you want to make an Option instance of something, you should use Some().
Because you have an implicit Int in scope when defining test2, it's applied to test1, so you really end up with test1(i) _, which makes no sense to the compiler.
If it did compile, test2 would return the same function corresponding to test1 independent of the argument. If that's what you actually want, you can fix it by being more explicit: test1(_: Int) or { x: Int => test1(x) } instead of test1 _.
There are several different possible answers here depending on exactly what you are trying to achieve. If you meant test2 to be a behaviorally identical method to test1, then there is no need to use the underscore (and indeed you cannot!).
def test3(implicit i: Int) = test1(i)
If you meant test2 to be a behaviorally identical function to test1, then you need to do the following:
val test4 = test1(_: Int)
Note that I am using a val instead of a def here, because when using the underscore, I am turning test1, which was originally a method, into an instance of a class (specifically one which extends Function1, i.e. a function). Also note that test4 no longer has implicit parameters, as it is a function (and to the best of my knowledge functions cannot have implicit parameters), whereas test1 was a method.
Why you need to do this as opposed to just test1 _ turns out to be pretty complex...
TLDR: Scala is finicky about the difference between foo(_) and foo _, methods are different from functions, and implicit resolution takes precedence over eta-expansion.
Turning a method into a function object by way of an anonymous function in Scala is known as "eta-expansion." In general eta-expansion comes from the lambda calculus and refers to turning a term into its equivalent anonymous function (e.g. thing becomes x => thing(x)) or in the language of the lambda calculus, adding an abstraction over a term.
Note that without eta-expansion, programming in Scala would be absolutely torturous because methods are not functions. Therefore they cannot be directly passed as arguments to other methods or other functions. Scala tries to convert methods into functions via eta-expansion whenever possible, so as to give the appearance that you can pass methods to other methods.
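As a small illustration (double and the surrounding names are made up for this sketch), here is a method being eta-expanded into a function, both implicitly and explicitly:

object EtaSketch {
  // A method, not a value; it cannot be passed around on its own.
  def double(x: Int): Int = x * 2

  // Passing the method where a function is expected triggers eta-expansion,
  // effectively turning it into x => double(x).
  val doubled: List[Int] = List(1, 2, 3).map(double)

  // The same expansion can be requested explicitly with an underscore.
  val f: Int => Int = double _
}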
So why doesn't the following work?
val test5 = test1 _ // error: could not find implicit value for parameter i: Int
Well let's see what would happen if we tried test4 without a type signature.
val test6 = test1(_) // error: missing parameter type for expanded function ((x$1) => test1(x$1))
Ah ha, different errors!
This is because test1(_) and test1 _ are subtly different things. test1(_) is partially applying the method test1 while test1 _ is a direction to perform eta-expansion on test1 until it is fully applied.
For example,
math.pow(_) // error: not enough arguments for method pow: (x: Double, y: Double)Double.
math.pow _ // this is fine
But what about the case where we have just one argument? What's the difference between partial application and eta-expansion of just one abstraction? Not really all that much. One way to view it is that eta-expansion is one way of implementing partial application. Indeed the Scala Spec seems to be silent on the difference between foo(_) and foo _ for a single parameter method. The only difference that matters for us is that the expansion that occurs in foo(_) always "binds" tighter than foo _, indeed it seems to bind as closely as method application does.
That's why test1(_) gives us an error about types. Partial application is treated analogously to normal method application. Because normal method application must always take place before implicit resolution (otherwise we could never replace an implicit value with our own value), partial application also takes place before implicit resolution. It just so happens that the way Scala implements partial application is via eta-expansion, and therefore we get the expansion into the anonymous function, which then complains about the lack of a type for its argument.
My reading of Section 6.26 of the Scala Spec suggests that there is an ordering to the resolution of various conversions. In particular it seems to list resolving implicits as coming before eta-expansion. Indeed, for fully-applied eta-expansion, it would seem that this would necessarily be the case since Scala functions cannot have implicit parameters (only its methods can).
Hence, in the case of test5, as Alexey says, when we are explicitly telling Scala to eta-expand test1 with test1 _, the implicit resolution takes place first, and then eta-expansion tries to take place, which fails because after implicit resolution Scala's typechecker realizes we have an Option, not a method.
So that's why we need test1(_) over test1 _. The final type annotation test1(_: Int) is needed because Scala's type inference isn't robust enough to determine that after eta-expanding test1, the only possible type you can give to the anonymous function is the same as the type signature for the method. In fact, if we gave the type system enough hints, we can get away with test1(_).
val test7: Int => Option[Int] = test1(_) // this works
val test8: Int => Option[Int] = test1 _ // this still doesn't

Demystifying a function definition

I am new to Scala, and I hope this question is not too basic. I couldn't find the answer to this question on the web (which might be because I don't know the relevant keywords).
I am trying to understand the following definition:
def functionName[T <: AnyRef](name: Symbol)(range: String*)(f: T => String)(implicit tag: ClassTag[T]): DiscreteAttribute[T] = {
val r = ....
new anotherFunctionName[T](name.toString, f, Some(r))
}
First, why is it defined as def functionName[...](...)(...)(...)(...)? Can't we define it as def functionName[...](..., ..., ..., ...)?
Second, how does range: String* differ from range: String?
Third, would it be a problem if implicit tag: ClassTag[T] did not exist?
First, why is it defined as def functionName[...](...)(...)(...)(...)? Can't we define it as def functionName[...](..., ..., ..., ...)?
One good reason to use currying is to support type inference. Consider these two functions:
def pred1[A](x: A, f: A => Boolean): Boolean = f(x)
def pred2[A](x: A)(f: A => Boolean): Boolean = f(x)
Since type information flows from left to right, if you try to call pred1 like this:
pred1(1, x => x > 0)
the type of x => x > 0 cannot be determined yet and you'll get an error:
<console>:22: error: missing parameter type
pred1(1, x => x > 0)
^
To make it work you have to specify argument type of the anonymous function:
pred1(1, (x: Int) => x > 0)
pred2, on the other hand, can be used without specifying the argument type:
pred2(1)(x => x > 0)
or simply:
pred2(1)(_ > 0)
Second, how does range: String* differ from range: String?
It is the syntax for defining repeated parameters, a.k.a. varargs. Ignoring other differences, it can be used only in the last position and is available inside the method as a scala.Seq (here scala.Seq[String]). A typical usage is the apply method of the collection types, which allows for syntax like SomeDummyCollection(1, 2, 3); a short sketch follows the links below. For more see:
What does `:_*` (colon underscore star) do in Scala?
Scala variadic functions and Seq
Is there a difference in Scala between Seq[T] and T*?
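Here is the short sketch of repeated parameters mentioned above (joinAll is a made-up name, not a library method):

object VarargsSketch {
  // A repeated parameter must come last in its parameter list and is seen
  // inside the method as a Seq[String].
  def joinAll(sep: String)(parts: String*): String = parts.mkString(sep)

  joinAll("-")("a", "b", "c")          // pass the arguments one by one
  joinAll("-")(Seq("a", "b", "c"): _*) // or expand an existing Seq with : _*
}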
Third, would it be a problem if implicit tag: ClassTag[T] did not exist?
As already stated by Aivean, it shouldn't be a problem here. ClassTags are automatically generated by the compiler and should be accessible as long as the class exists. In the general case, if an implicit argument cannot be found you'll get an error:
scala> import scala.concurrent._
import scala.concurrent._
scala> val answer: Future[Int] = Future(42)
<console>:13: error: Cannot find an implicit ExecutionContext. You might pass
an (implicit ec: ExecutionContext) parameter to your method
or import scala.concurrent.ExecutionContext.Implicits.global.
val answer: Future[Int] = Future(42)
Multiple argument lists: this is called "currying", and enables you to call a function with only some of the arguments, yielding a function that takes the rest of the arguments and produces the result type (partial application). Here is a link to Scala documentation that gives an example of using this. Further, any implicit arguments to a function must be specified together in one argument list, coming after any other argument lists. While defining functions this way is not necessary (apart from any implicit arguments), this style of function definition can sometimes make it clearer how the function is expected to be used, and/or make the syntax for partial application look more natural (f(x) rather than f(x, _)).
Arguments with an asterisk: "varargs". This syntax denotes that rather than a single argument being expected, a variable number of arguments can be passed in, which will be handled as (in this case) a Seq[String]. It is the equivalent of specifying (String... range) in Java.
the implicit ClassTag: this is often needed to ensure proper typing of the function result, where the runtime class of the type (T here) would otherwise not be available. Since Scala runs on the JVM, which does not retain generic type information beyond compile time, this is a work-around used in Scala to ensure that information about the type(s) involved is still available at runtime.
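Putting the three pieces together, here is a sketch in the same shape as functionName; all names are hypothetical and the body is only illustrative:

import scala.reflect.ClassTag

object ShapeSketch {
  // Mirrors the shape of functionName: multiple parameter lists, a repeated
  // String parameter, and an implicit ClassTag giving runtime access to T.
  def describe[T <: AnyRef](name: Symbol)(range: String*)(f: T => String)(implicit tag: ClassTag[T]): String =
    name.name + " handles " + tag.runtimeClass.getSimpleName + " over " + range.mkString(", ")

  // The curried shape lets the compiler infer the lambda's parameter type from T.
  describe[String](Symbol("width"))("low", "high")(s => s)
}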
Check currying: methods may define multiple parameter lists. When a method is called with fewer parameter lists than it defines, this yields a function taking the missing parameter lists as its arguments.
range: String* is the syntax for varargs.
An implicit TypeTag (here, ClassTag) parameter in Scala is the alternative to a Class<T> clazz parameter in Java. It will always be available if your class is defined in scope. Read more about type tags.

Can structural typing work with generics?

I have an interface defined using a structural type like this:
trait Foo {
  def collection: {
    def apply(a: Int): String
    def values(): collection.Iterable[String]
  }
}
I wanted to have one of the implementers of this interface do so using a standard mutable HashMap:
import scala.collection.mutable.HashMap

class Bar extends Foo {
  val collection: HashMap[Int, String] = HashMap[Int, String]()
}
It compiles, but at runtime I get a NoSuchMethodException when referring to a Bar instance through a Foo-typed variable. Dumping out the object's methods via reflection, I see that the HashMap's apply method takes an Object due to type erasure, and there's some crazily renamed generated apply method that does take an int. Is there a way to make generics work with structural types? Note that in this particular case I was able to solve my problem using an actual trait instead of a structural type, and that is overall much cleaner.
The short answer is that the apply method parameter is causing you grief because it requires some implicit conversions of the parameter (Int => Integer). Implicits are resolved at compile time; the NoSuchMethodException is likely a result of these missing implicits.
Attempt to use the values method and it should work since there are no implicits being used.
I've attempted to find a way to make this example work but have had no success so far.
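For completeness, here is a sketch of the trait-based workaround the question mentions (IntKeyed, Foo2 and Bar2 are made-up names); it avoids the reflective structural call entirely:

import scala.collection.mutable.HashMap

// Declare just the operations Foo needs in a real trait, so no reflective
// (structural) call is involved and erasure no longer gets in the way.
trait IntKeyed[V] {
  def apply(a: Int): V
  def values: Iterable[V]
}

trait Foo2 {
  def collection: IntKeyed[String]
}

class Bar2 extends Foo2 {
  private val underlying = HashMap[Int, String]()
  val collection = new IntKeyed[String] {
    def apply(a: Int): String = underlying(a)
    def values: Iterable[String] = underlying.values
  }
}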

Could not find implicit value for evidence parameter of type scala.reflect.ClassManifest[T]

It seems I don't understand something important, maybe about erasure (damn it).
I have a method with which I wanted to create an array of size n filled with values from gen:
def testArray[T](n: Int, gen: => T) {
  val arr = Array.fill(n)(gen)
  ...
}
And use it, for example as:
testArray(10, util.Random.nextInt(10))
But I get error:
scala: could not find implicit value for evidence parameter of type scala.reflect.ClassManifest[T]
val arr = Array.fill(n)(gen)
^
Please explain what I did wrong, why I get this error, and what kind of code this restriction makes impossible.
That is because in testArray the concrete type of T is not known at compile time. Your signature has to look like def testArray[T : ClassManifest](n: Int, gen: => T); this will add an implicit parameter of type ClassManifest[T] to your method, which is automatically filled in at the call site of testArray and then passed further on to the Array.fill call. This is called a context bound.
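A sketch of the corrected method under that suggestion (the original body is elided; this version just returns the array):

// The context bound [T: ClassManifest] adds the implicit parameter that
// Array.fill needs in order to create a correctly typed array at runtime.
// (In current Scala versions, ClassTag plays the same role.)
def testArray[T: ClassManifest](n: Int, gen: => T): Array[T] =
  Array.fill(n)(gen)

testArray(10, util.Random.nextInt(10))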
The Array.fill method has the following signature:
def fill[T](n: Int)(elem: => T)(implicit arg0: ClassManifest[T]): Array[T]
In order to get an instance of ClassManifest[T] you need to know the concrete type. A ClassManifest can be obtained like this:
implicitly[ClassManifest[String]]
A ClassManifest is implicitly available for every concrete type.
For any implicit error, you can add the implicits you require to the method with the type parameter:
def wrap[T](n:Int)(elem: => T)(implicit c:ClassManifest[T], o:Ordering[T])
If you did not yourself introduce ClassManifest or Ordering, the writers of the library have (most likely) provided sensible defaults for you.
If you were to call the wrap method:
wrap(2)(3)
It's expanded like this:
wrap[Int](2)(3)(implicitly[ClassManifest[Int]], implicitly[Ordering[Int]])
If you introduced a custom class Person here, you would get an error for not finding an implicit instance of Ordering[Person]. The writers of the library could not have known how to order Person. You could solve that like this:
class Person

implicit val o: Ordering[Person] = new Ordering[Person] {
  def compare(a: Person, b: Person) = 0 // implement the required methods properly
}

wrap(2)(new Person)
The Scala compiler looks in different scopes for implicits; an Ordering would usually not be specified like this. I suggest you look up implicit resolution on the internet to learn more about it.

Map from Class[T] to T without casting

I want to map from class tokens to instances along the lines of the following code:
trait Instances {
  def put[T](key: Class[T], value: T)
  def get[T](key: Class[T]): T
}
Can this be done without having to resolve to casts in the get method?
Update:
How could this be done for the more general case with some Foo[T] instead of Class[T]?
You can try retrieving the object from your map as an Any, then using your Class[T] to “cast reflectively”:
trait Instances {
  private val map = collection.mutable.Map[Class[_], Any]()
  def put[T](key: Class[T], value: T) { map += (key -> value) }
  def get[T](key: Class[T]): T = key.cast(map(key))
}
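Usage might look something like this (InstancesDemo is just an illustrative name):

// Sketch of how the Class-keyed version above might be used.
object InstancesDemo extends Instances {
  put(classOf[String], "hello")
  put(classOf[Integer], Integer.valueOf(42))

  val s: String = get(classOf[String]) // no cast needed at the call site
}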
With the help of a friend of mine, we defined the map with Manifest keys instead of Class keys, which gives a better API when calling.
I didn't get your updated question about the "general case with some Foo[T] instead of Class[T]", but this should work for the cases you specified.
object Instances {
  private val map = collection.mutable.Map[Manifest[_], Any]()

  def put[T: Manifest](value: T) = map += manifest[T] -> value
  def get[T: Manifest]: T = map(manifest[T]).asInstanceOf[T]

  def main(args: Array[String]) {
    put(1)
    put("2")
    println(get[Int])
    println(get[String])
  }
}
If you want to do this without any casting (even within get) then you will need to write a heterogeneous map. For reasons that should be obvious, this is tricky. :-) The easiest way would probably be to use a HList-like structure and build a find function. However, that's not trivial since you need to define some way of checking type equality for two arbitrary types.
I attempted to get a little tricky with tuples and existential types. However, Scala doesn't provide a unification mechanism (pattern matching doesn't work). Also, subtyping ties the whole thing in knots and basically eliminates any sort of safety it might have provided:
val xs: List[(Class[A], A) forSome { type A }] = List(
classOf[String] -> "foo", classOf[Int] -> 42)
val search = classOf[String]
val finalResult = xs collect { case (`search`, result) => result } headOption
In this example, finalResult will be of type Any. This is actually rightly so, since subtyping means that we don't really know anything about A. It's not why the compiler is choosing that type, but it is a correct choice. Take for example:
val xs: List[(Class[A], A) forSome { type A }] = List(classOf[Boolean] -> 'bippy)
This is totally legal! Subtyping means that A in this case will be chosen as Any. It's hardly what we want, but it is what you will get. Thus, in order to express this constraint without tracking all of the types individually (using an HMap), Scala would need to be able to express the constraint that a type is a specific type and nothing else. Unfortunately, Scala does not have this ability, and so we're basically stuck on the generic constraint front.
Update: Actually, it's not legal. I just tried it and the compiler kicked it out. I think that only worked because Class is invariant in its type parameter. So, if Foo is a definite type that is invariant, you should be safe from this case. It still doesn't solve the unification problem, but at least it's sound. Unfortunately, type constructors are assumed to be in a magical super-position between co-, contra- and invariance, so if it's truly an arbitrary type Foo of kind * => *, then you're still sunk on the existential front.
In summary: it should be possible, but only if you fully encode Instances as a HMap. Personally, I would just cast inside get. Much simpler!
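For the record, here is a sketch of that cast-inside-get approach generalized to an arbitrary invariant key type; Foo, FooInstances, and the keys are hypothetical names, not from any library:

// The cast is hidden inside get; safety relies on put and get always being
// called with the same typed key.
class Foo[T](val name: String)

class FooInstances {
  private val map = collection.mutable.Map[Foo[_], Any]()
  def put[T](key: Foo[T], value: T): Unit = map += (key -> value)
  def get[T](key: Foo[T]): T = map(key).asInstanceOf[T]
}

object FooDemo {
  val widthKey = new Foo[Int]("width")
  val labelKey = new Foo[String]("label")
  val instances = new FooInstances
  instances.put(widthKey, 3)
  instances.put(labelKey, "x")
  val w: Int = instances.get(widthKey) // same key for put and get, so the cast is benign
}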