Scalaz 7 - why using type alias results in ambiguous typeclass resolution for Reader - scala

Code to test with:
import scalaz.{Reader, Applicative}
class ReaderInstanceTest {
  type IntReader[A] = Reader[Int, A]

  val a = Applicative[({type l[A] = Reader[Int, A]})#l] // fine
  val b = Applicative[IntReader]
  // ^ ambiguous implicit values
  //   both method kleisliMonadReader ..
  //   and method kleisliIdMonadReader ..
}
Is this related to Scala's higher-order unification for type constructor inference ticket? If so (and even if not), could you describe what happens here in the a and b cases?
Do you have guidelines about when to use a type lambda and when to use a type alias, so that everything works out in the long run without unexpected errors?

Yes, this is related to SI-2712.
kleisliIdMonadReader exists solely to guide type inference; it just forwards to kleisliMonadReader. When you provide the type alias IntReader, scalac doesn't need this assistance and can infer the type arguments for kleisliMonadReader directly, which leads to the ambiguity.
I've just committed a remedy: we can prioritize these implicits relative to each other, by defining one in a subclass.
https://github.com/scalaz/scalaz/commit/6f9ae5f
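The general shape of that fix, as a hedged sketch with a made-up Show typeclass rather than the actual scalaz code: implicits defined in a derived object out-prioritize implicits inherited from a base trait, so when both instances would otherwise apply, the compiler picks one instead of reporting "ambiguous implicit values".

trait Show[A] { def show(a: A): String }

trait LowPriorityShows {
  // general instance, analogous to kleisliMonadReader
  implicit def anyShow[A]: Show[A] = new Show[A] { def show(a: A) = a.toString }
}
object Shows extends LowPriorityShows {
  // more targeted instance, analogous to kleisliIdMonadReader
  implicit val intShow: Show[Int] = new Show[Int] { def show(a: Int) = s"Int($a)" }
}

import Shows._
// Both anyShow[Int] and intShow apply here; intShow wins because it is defined
// in the derived object, so there is no ambiguity error.
implicitly[Show[Int]].show(42) // "Int(42)"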

Related

In Scala, how to circumvent 'inferred type arguments do not conform' error?

I have a reflective function with implicit TypeTag parameter:
def fromOptionFn[R: TypeTag](self: Int => Option[R]): Wrapper[R] = {
  println(typeTag[R])
  ...
}
For reasons I don't understand, this doesn't work (see How to make Scala type inference powerful enough to discover generic type parameter?):
> fromOptionFn2(v => Some("" + v))
> TypeTag[Any]
I speculate that it's caused by inferring R from Option[R], so I improved it a bit:
def fromOptionFn[R, Opt <: Option[R]: TypeTag](self: Int => Opt): Wrapper[R] = {
  println(typeTag[Opt])
  ...
}
This time it's worse: it doesn't even compile, and the error makes it clear that Scala is not smart enough to analyse the type:
> fromOptionFn2(v => Some("" + v))
Error: inferred type arguments [Nothing,Option[String]] do not conform to method fromOptionFn's type parameter bounds [R,Opt <: Option[R]]
So how do I temporarily circumvent this compilation problem? (Of course I can report it on the Lightbend issue tracker, but that's too slow.)
ADDENDUM: This problem itself is an attempted circumvention of How to make Scala type inference powerful enough to discover generic type parameter?, which might never be fixed. In my case I don't mind getting the TypeTag of R or of Option[R]; whichever works.
This isn't an improvement; quite the opposite. Scala type inference simply doesn't support inferring Opt first and deriving R from it: instead it infers R as Nothing, because R isn't part of any parameter type (and the expected return type is unknown).
You could circumvent it by specifying the type parameters explicitly on every call: fromOptionFn2[String, Option[String]](...). Giving the expected type should also work in this specific case, I think: fromOptionFn2(...): Wrapper[String]. However, a better idea would be not to use type parameter signatures like [R, Opt <: Option[R]] in the first place.
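For illustration, a rough sketch of those workarounds with a stubbed-out Wrapper (the real Wrapper belongs to the question's codebase and isn't shown there, so the case class below is a hypothetical stand-in):

import scala.reflect.runtime.universe._

case class Wrapper[R](tag: TypeTag[R]) // hypothetical stand-in

def fromOptionFn[R: TypeTag](self: Int => Option[R]): Wrapper[R] = {
  println(typeTag[R])
  Wrapper(typeTag[R])
}

// give the expected type so R is pinned down at the call site...
val w1: Wrapper[String] = fromOptionFn(i => Some(i.toString))
// ...or pass the type argument explicitly when inference falls short
val w2 = fromOptionFn[String](i => Some(i.toString))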

Scala : Does variable type inference affect performance?

In Scala, you can declare a variable by specifying the type, like this: (method 1)
var x : String = "Hello World"
or you can let Scala automatically detect the variable type (method 2)
var x = "Hello World"
Why would you use method 1? Does it have a performance benefit?
And once the variable has been declared, will it behave exactly the same in all situations, whether it was declared by method 1 or method 2?
Type inference is done at compile time - it's essentially the compiler figuring out what you mean, filling in the blanks, and then compiling the resulting code.
What this means is that there can be no runtime cost to type inference. The compile time cost, however, can sometimes be prohibitive and require you to explicitly annotate some of your expressions.
You will not see any performance difference between these two variants.
They will both be compiled to the same code.
The other answers assume that the compiler inferred what you think it inferred.
It is easy to demonstrate that specifying the type in a definition will set the expected type for the RHS of the definition and guide type inference.
For example, in this method that builds a collection of something, A is inferred to be Nothing, which may not be what you wanted:
scala> def build[A, B, C <: Iterable[B]](bs: B*)(implicit cbf: CanBuildFrom[A, B, C]): C = {
| val b = cbf(); println(b.getClass); b ++= bs; b.result }
build: [A, B, C <: Iterable[B]](bs: B*)(implicit cbf: scala.collection.generic.CanBuildFrom[A,B,C])C
scala> val xs = build(1,2,3)
class scala.collection.immutable.VectorBuilder
xs: scala.collection.immutable.IndexedSeq[Int] = Vector(1, 2, 3)
scala> val xs: List[Int] = build(1,2,3)
class scala.collection.mutable.ListBuffer
xs: List[Int] = List(1, 2, 3)
scala> val xs: Seq[Int] = build(1,2,3)
class scala.collection.immutable.VectorBuilder
xs: Seq[Int] = Vector(1, 2, 3)
Obviously, it matters for runtime performance whether you get a List or a Vector.
This is a lame example, but in many expressions you wouldn't notice the type of an intermediate collection unless it caused a performance problem.
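To make that concrete, here is a small sketch (pre-2.13 collections, where CanBuildFrom applies) of how the expected type can steer which builder, and hence which intermediate collection, a map call uses; breakOut is the standard-library hook for this:

import scala.collection.breakOut

val pairs = List(1, 2, 3)

// default: the builder comes from the source collection, so you get a List of pairs
val m1 = pairs.map(i => i -> i.toString) // List[(Int, String)]

// with breakOut, the builder is chosen from the expected type instead,
// so the Map is built directly, without an intermediate List
val m2: Map[Int, String] = pairs.map(i => i -> i.toString)(breakOut)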
Sample conversations:
https://groups.google.com/forum/#!msg/scala-language/mQ-bIXbC1zs/wgSD4Up5gYMJ
http://grokbase.com/p/gg/scala-user/137mgpjg98/another-funny-quirk
Why is Seq.newBuilder returning a ListBuffer?
https://groups.google.com/forum/#!topic/scala-user/1SjYq_qFuKk
In the simple example you gave, there is no difference in the generated byte code, and therefore no difference in performance. It would also make no noticeable difference in compilation speed.
In more complex code (likely involving implicits) you could run into cases where compile-time performance would be noticeably improved by specifying some types. However, I would completely ignore this until and unless you run into it -- specify types or not for other, better reasons.
More in line with your question, there is one very important case where it is a good idea to specify the type to ensure good run-time performance. Consider this code:
val x = new AnyRef { def sayHi() = println("Howdy!") }
x.sayHi
That code uses reflection to call sayHi, and that's a huge performance hit. Recent versions of Scala will warn you about this code for that reason, unless you have enabled the language feature for it:
warning: reflective access of structural type member method sayHi should be enabled
by making the implicit value scala.language.reflectiveCalls visible.
This can be achieved by adding the import clause 'import scala.language.reflectiveCalls'
or by setting the compiler option -language:reflectiveCalls.
See the Scala docs for value scala.language.reflectiveCalls for a discussion
why the feature should be explicitly enabled.
You might then change the code to this, which does not make use of reflection:
trait Talkative extends AnyRef { def sayHi(): Unit }
val x = new Talkative { def sayHi() = println("Howdy!") }
x.sayHi
For this reason you generally want to specify the type of the variable when you are defining classes this way; that way if you inadvertently add a method that would require reflection to call, you'll get a compilation error -- the method won't be defined for the variable's type. So while it is not the case that specifying the type makes the code run faster, it is the case that if the code would be slow, specifying the type makes it fail to compile.
val x: AnyRef = new AnyRef { def sayHi() = println("Howdy!") }
x.sayHi // ERROR: sayHi is not defined on AnyRef
There are of course other reasons why you might want to specify a type. They are required for the formal parameters of methods/functions, and for the return types of methods that are recursive or overloaded.
Also, you should always specify return types for methods in a public API (unless they are just trivially obvious), or you might end up with different method signatures than you intended, and then risk breaking existing clients of your API when you fix the signature.
You may of course want to deliberately widen a type so that you can assign other types of things to a variable later, e.g.
var shape: Shape = new Circle(1.0)
shape = new Square(1.0)
But in these cases there is no performance impact.
It is also possible that specifying a type will cause a conversion, and of course that will have whatever performance impact the conversion imposes.
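Two common examples of such annotation-triggered conversions (a sketch, not taken from the question):

val n: Long = 42                  // numeric widening of the Int literal
val xs: Seq[Int] = Array(1, 2, 3) // the Array is implicitly wrapped to satisfy Seq[Int],
                                  // a small allocation you would not pay for with `val xs = Array(1, 2, 3)`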

Manifest and abstract type resolution

I am hitting a compiler problem when the compiler needs to solve a manifest for a class with an abstract type parameter. The following snippet shows the issue:
trait MyStuff

trait SecurityMutatorFactory[X] {
  def apply(x1: X, x2: X)
}

object Example {
  trait LEdge[N] {
    type L1
  }

  type MyEdge[X] = LEdge[X] { type L1 = SecurityMutatorFactory[X] }

  val a: Manifest[MyEdge[MyStuff]] = implicitly[Manifest[MyEdge[MyStuff]]]
}
As a result, the compiler throws the following type error:
type mismatch;
found : scala.reflect.Manifest[LEdge[MyStuff]]
required: Manifest[MyEdge[MyStuff]]
Note: LEdge[MyStuff] >: MyEdge[MyStuff], but trait Manifest is invariant in type T.
You may wish to investigate a wildcard type such as `_ >: MyEdge[MyStuff]`. (SLS 3.2.10)
val a:Manifest[MyEdge[MyStuff]] = implicitly[Manifest[MyEdge[MyStuff]]]
What is happening at the compiler level?
As others have suggested the problem comes from
type MyEdge[X] = LEdge[X] { type L1 = SecurityMutatorFactory[X] }
Declarations of the form type F[X] = ... introduce type synonyms, i.e. new names for existing types. They do not construct new traits or classes. However, LEdge[X] { type L1 = SecurityMutatorFactory[X] } constructs a new anonymous class. So your example is approximately equivalent to
trait MyEdge[X] extends LEdge[X] { type L1 = SecurityMutatorFactory[X] }
(which is most probably what you want), but the original definition in the example defines a synonym for an anonymous class rather than a new class MyEdge[X], so the new class is not actually called MyEdge. When constructing the implicit manifest, the compiler replaces the type synonym with the underlying type, but fails to construct a manifest for it because that type is anonymous.
Replacing the MyEdge declaration with either a normal extension definition:
trait MyEdge[X] extends LEdge[X] { type L1 = SecurityMutatorFactory[X] }
or with an ordinary type synonym:
type MyEdge[X] = LEdge[X]
both compile successfully.
EDIT
Here is the specific reason why generating implicit manifests for anonymous classes fails.
In the language specification, type expressions of the form BaseType { ... } are called refined types.
According to the language specification, the manifest for a refined type is just the manifest of its base class. This, however, fails to typecheck, because you asked for a Manifest[LEdge[MyStuff] { type L1 = SecurityMutatorFactory[MyStuff] }], but the algorithm returns a Manifest[LEdge[MyStuff]]. This means you can construct implicit manifests only for types whose refinements appear in contravariant positions. For example, using:
type MyEdge[X] = LEdge[X] { type L1 = SecurityMutatorFactory[X] } => AnyRef
in your example allows it to compile, though it is clearly not what you are after.
The full algorithm for constructing implicit manifests is given at the end of section 7.5 of the language specification. This question is covered by clause 6:
6) If T is a refined type T'{R}, a manifest is generated for T'. (That is, refinements are never reflected in manifests).
Well, I'm not so familiar with that kind of pattern:
type MyEdge[X] = LEdge[X] { type L1 = SecurityMutatorFactory[X]}
but I tend to consider types defined with the type keyword as aliases (concepts) rather than guarantees about the implementation (EDIT: more precisely, I believe that type provides guarantees in terms of prototyping/specifying, but that no AST/code is generated until there is an actual need to replace the alias with the traits/classes it is based upon). So even if the compiler claims, in its error message:
LEdge[MyStuff] >: MyEdge[MyStuff]
I'm not sure that, at the bytecode level, it implements MyEdge accordingly, with interfaces/methods/etc. Thus, it might ultimately not recognize the expected relationship between LEdge and MyEdge:
found : scala.reflect.Manifest[LEdge[MyStuff]]
required: Manifest[MyEdge[MyStuff]]
(and is the absence of the scala.reflect. prefix on the required type a hint? (1))
Regarding your code, how do you use a? Anyway, if the following is your intent:
trait MyEdge[X] extends LEdge[X] {
  type L1 = SecurityMutatorFactory[X]
}
instead, it does compile (Scala 2.10)... (EDIT: I just noticed that dmitry already said this) ...what it does at runtime, I don't know!
As a side note, Manifest is deprecated after Scala 2.9, so you may prefer to use TypeTag[T] as described in the Scaladoc.
EDIT:
(1) I suspect that the following happens:
- at the syntactic analysis phase, the compiler registers literally what you specified, i.e. that the implicitly method shall return a Manifest[MyEdge[MyStuff]];
- by the code generation phase, the aliases are "reconciled" with their nearest classes or traits; in the case of implicitly, the result type Manifest[MyEdge[MyStuff]] becomes trait scala.reflect.Manifest[LEdge[MyStuff]];
- due to some limitations of the type inference involved with Manifest and type "aliasing" within type parameters, however, the specified requirement Manifest[MyEdge[MyStuff]] somehow remains in its raw shape;
- (this is pure conjecture, because I have not read the Scala compiler source code for this answer) the compiler would then have the proper AST/code on the one hand, but a method prototype/spec still in its raw/literal shape on the other; the two don't fit together, so it emits an error.
Hoping that helps...
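Following up on the TypeTag remark above, a rough, untested sketch (Scala 2.10+, reusing the question's LEdge/SecurityMutatorFactory/MyStuff definitions, with the failing val a line removed so that Example compiles). Unlike implicit Manifests, TypeTags are materialized by the compiler and can represent refined types, although the exact rendering of the type may vary by version:

import scala.reflect.runtime.universe._

// MyEdge[MyStuff] dealiases to the refined type LEdge[MyStuff] { type L1 = ... }
val tt = typeTag[Example.MyEdge[MyStuff]]
println(tt.tpe)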

Scala ClassManifest instead of Type[T]

The following code
def httpPost[T: ClassManifest](data: AnyRef): T = {
  val webResource = client.resource("http://localhost..")
  val resp = webResource.post(classOf[ClientResponse], data)
  resp.getEntity(classManifest[T].erasure) // Need classOf[T] here
}
results in this type mismatch compilation error
[INFO] found : _$1 where type _$1
[INFO] required: T
[INFO] resp.getEntity(classManifest[T].erasure)
Based on the answer to Scala classOf for type parameter it looks like it should work.
The erasure method returns java.lang.Class[_] and I presume that this is the problem so I have two questions:
Why does the class manifest return an existential type and not simply Class[T] - if it's the erasure of T, surely that will always be _ (underscore) because T is obviously unknown, which means its return value isn't as useful as I'd have expected.
What do I need to do to make the code work?
Update:
Thanks Kim and Jean-Philippe for your answers.
I had previously tried a cast, so the original last line was replaced with:
val responseData = resp.getEntity(classManifest[T].erasure) //Runtime error
responseData.asInstanceOf[T]
and this compiles, but there's now a runtime error, because getEntity is passed the class of Object, which it can't process because it needs a more specific type (for which it has a handler). Although the failure is deferred until runtime, it again comes down to the erasure method not giving specific type information, which is why I thought the inline example had to be solved first.
There's something seriously wrong with this code. In particular:
def httpPost[T: ClassManifest](data: AnyRef): T = {
  val webResource = client.resource("http://localhost..")
  val resp = webResource.post(classOf[ClientResponse], data)
  resp.getEntity(classManifest[T].erasure) // Need classOf[T] here
}
How is Scala supposed to know what the type of T is? Are you passing it explicitly when invoking httpPost? I suspect not, and that's the reason why erasure is returning Object for you.
As for why ClassManifest#erasure returns Class[_] instead of something else, I suspect the reason is that this is the type used by most Java methods, and since Class is invariant, if erasure returned Class[T], then you'd have to cast it to use it with those methods!
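As a small illustration of that invariance point (not taken from the question's code):

val cs: Class[String] = classOf[String]
// val co: Class[Object] = cs                          // does not compile: Class is invariant
val co: Class[Object] = cs.asInstanceOf[Class[Object]] // the cast you'd be forced to write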
First question: No idea...
Second question: I think it is safe to cast here. You can use foo.asInstanceOf[Class[T]].
I believe that an existential type is returned to make it clear that the cast that you may want to make is your responsibility. Class is a bit weird: for instance, a Class[List[String]] should actually be typed as a Class[List[_]] as it does not carry any information about the String parametrization of List. The cast suggested by Kim is always safe when T is not itself a parametrized type.
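Putting Kim's cast back into the original method gives something like the following sketch (the Jersey-style client and ClientResponse come from the question and are not redefined here; note that T still has to be supplied explicitly at the call site, e.g. httpPost[MyDto](payload), for the manifest to carry anything useful):

def httpPost[T: ClassManifest](data: AnyRef): T = {
  val webResource = client.resource("http://localhost..")
  val resp = webResource.post(classOf[ClientResponse], data)
  // cast the erased Class[_] to Class[T]; safe as long as T is not itself parameterized
  resp.getEntity(classManifest[T].erasure.asInstanceOf[Class[T]])
}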

Spurious ambiguous reference error in Scala 2.7.7 compiler/interpreter?

Can anyone explain the compile error below? Interestingly, if I change the return type of the get() method to String, the code compiles just fine. Note that the thenReturn method has two overloads: a unary method and a varargs method that takes at least one argument. It seems to me that if the invocation is ambiguous here, then it would always be ambiguous.
More importantly, is there any way to resolve the ambiguity?
import org.scalatest.mock.MockitoSugar
import org.mockito.Mockito._

trait Thing {
  def get(): java.lang.Object
}

new MockitoSugar {
  val t = mock[Thing]
  when(t.get()).thenReturn("a")
}
error: ambiguous reference to overloaded definition,
both method thenReturn in trait OngoingStubbing of type
(java.lang.Object,java.lang.Object*)org.mockito.stubbing.OngoingStubbing[java.lang.Object]
and method thenReturn in trait OngoingStubbing of type
(java.lang.Object)org.mockito.stubbing.OngoingStubbing[java.lang.Object]
match argument types (java.lang.String)
when(t.get()).thenReturn("a")
Well, it is ambiguous. I suppose Java semantics allow for it, and it might merit a ticket asking for Java semantics to be applied in Scala.
The source of the ambiguity is this: a vararg parameter may receive any number of arguments, including 0. So, when you write thenReturn("a"), do you mean to call the thenReturn that receives a single argument, or the thenReturn that receives one object plus a vararg, passing 0 arguments to the vararg?
Now, when this kind of thing happens, Scala tries to find which method is "more specific". Anyone interested in the details should look that up in Scala's specification, but here is the explanation of what happens in this particular case:
object t {
  def f(x: AnyRef) = 1 // A
  def f(x: AnyRef, xs: AnyRef*) = 2 // B
}
If you call f("foo"), both A and B are applicable. Which one is more specific?
- It is possible to call B with parameters of type (AnyRef), so A is as specific as B.
- It is possible to call A with parameters of type (AnyRef, Seq[AnyRef]): thanks to tuple conversion, Tuple2[AnyRef, Seq[AnyRef]] conforms to AnyRef. So B is as specific as A.
Since each is as specific as the other, the reference to f is ambiguous.
As to the "tuple conversion" thing, it is one of the most obscure syntactic sugars in Scala. If you make a call f(a, b), where a and b have types A and B, and there is no f accepting (A, B) but there is an f which accepts a Tuple2[A, B], then the parameters (a, b) will be converted into a tuple.
For example:
scala> def f(t: Tuple2[Int, Int]) = t._1 + t._2
f: (t: (Int, Int))Int
scala> f(1,2)
res0: Int = 3
Now, there is no tuple conversion going on when thenReturn("a") is called. That is not the problem. The problem is that, given that tuple conversion is possible, neither version of thenReturn is more specific, because any parameter passed to one could be passed to the other as well.
In the specific case of Mockito, it's possible to use the alternate API methods designed for use with void methods:
doReturn("a").when(t).get()
Clunky, but it'll have to do, as Martin et al don't seem likely to compromise Scala in order to support Java's varargs.
Well, I figured out how to resolve the ambiguity (seems kind of obvious in retrospect):
when(t.get()).thenReturn("a", Array[Object](): _*)
As Andreas noted, if the ambiguous method requires a null reference rather than an empty array, you can use something like
v.overloadedMethod(arg0, null.asInstanceOf[Array[Object]]: _*)
to resolve the ambiguity.
If you look at the standard library APIs you'll see this issue handled like this:
def meth(t1: Thing): OtherThing = { ... }
def meth(t1: Thing, t2: Thing, ts: Thing*): OtherThing = { ... }
By doing this, no call (with at least one Thing argument) is ambiguous, and no extra fluff like Array[Thing](): _* is needed.
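A sketch of why that split avoids the ambiguity (Thing and OtherThing are stand-ins here, not real library types):

trait Thing
trait OtherThing

object Api {
  def meth(t1: Thing): OtherThing = new OtherThing {}
  def meth(t1: Thing, t2: Thing, ts: Thing*): OtherThing = new OtherThing {}
}

val x, y, z = new Thing {}
Api.meth(x)       // only the single-argument overload can apply
Api.meth(x, y)    // only the vararg overload can apply (zero varargs)
Api.meth(x, y, z) // vararg overload again; no ambiguity at any arity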
I had a similar problem using Oval (oval.sf.net) when trying to call its validate() method.
Oval defines two validate() methods:
public List<ConstraintViolation> validate(final Object validatedObject)
public List<ConstraintViolation> validate(final Object validatedObject, final String... profiles)
Trying this from Scala:
validator.validate(value)
produces the following compiler error:
both method validate in class Validator of type (x$1: Any,x$2: <repeated...>[java.lang.String])java.util.List[net.sf.oval.ConstraintViolation]
and method validate in class Validator of type (x$1: Any)java.util.List[net.sf.oval.ConstraintViolation]
match argument types (T)
var violations = validator.validate(entity);
Oval needs the varargs parameter to be null, not an empty array, so I finally got it to work with this:
validator.validate(value, null.asInstanceOf[Array[String]]: _*)