When can parentheses be safely omitted in Scala?

Here is a toy example:
object Example {
  import collection.mutable
  import collection.immutable.TreeSet
  val x = TreeSet(1, 5, 8, 12)
  val y = mutable.Set.empty ++= x
  val z = TreeSet.empty ++ y
  // This gives an error: unspecified parameter
  // val z = TreeSet.empty() ++ y
}
Apparently TreeSet.empty and TreeSet.empty() are not the same thing. What's going on under the hood? When can I safely omit (or not omit in this case) the parentheses?
Update
I had sent some code to the console and then deleted it in IntelliJ before evaluating the above code; here it is:
implicit object StringOrdering extends Ordering[String] {
  def compare(o1: String, o2: String) = {
    o1.length - o2.length
  }
}
object StringOrdering1 extends Ordering[String] {
  def compare(o1: String, o2: String) = {
    o2.length - o1.length
  }
}

This is a special case, and isn't quite relevant to when you can and cannot omit parentheses.
This is the signature for TreeSet.empty:
def empty[A](implicit ordering: Ordering[A]): TreeSet[A]
It has an implicit parameter list that requires an Ordering for the contained type A. When you call TreeSet.empty, the compiler will try to implicitly find the correct Ordering[A].
But when you call TreeSet.empty(), the compiler thinks you are trying to provide the implicit parameter explicitly, except that you leave the argument list empty, which is a compile error (wrong number of arguments). The only way this form works is if you explicitly pass some Ordering: TreeSet.empty(Ordering.Int).
Side note: Your above code does not actually compile with TreeSet.empty, because it succumbs to an ambiguous implicit error for Ordering. There is probably some implicit Ordering[Int] in your scope that you are not including in the question. It would be better to make the type explicit and use TreeSet.empty[Int].
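For illustration, a minimal sketch of the three call shapes (using the standard Ordering[Int] from scala.math.Ordering):
import collection.immutable.TreeSet
val a = TreeSet.empty[Int]                 // Ordering[Int] supplied implicitly
val b = TreeSet.empty[Int](Ordering.Int)   // the implicit list filled explicitly
// val c = TreeSet.empty[Int]()            // error: not enough arguments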

Related

scala overloading resolution differences between function calls and implicit search

There is a difference in the way the scala 2.13.3 compiler determines which overloaded function to call compared to which overloaded implicit to pick.
object Thing {
  trait A
  trait B extends A
  trait C extends A
  def f(a: A): String = "A"
  def f(b: B): String = "B"
  def f(c: C): String = "C"
  implicit val a: A = new A {}
  implicit val b: B = new B {}
  implicit val c: C = new C {}
}
import Thing._
scala> f(new B{})
val res1: String = B
scala> implicitly[B]
val res2: Thing.B = Thing$$anon$2#2f64f99f
scala> f(new A{})
val res3: String = A
scala> implicitly[A]
^
error: ambiguous implicit values:
both value b in object Thing of type Thing.B
and value c in object Thing of type Thing.C
match expected type Thing.A
As we can see, the overload resolution worked for the function call but not for the implicit pick. Why isn't the implicit offered by val a chosen, as happens with function calls? If the caller asks for an instance of A, why does the compiler consider instances of B and C when an instance of A is in scope? There would be no ambiguity if the resolution logic were the same as for function calls.
Edit 2:
Edit 1 was removed because the assertion I wrote there was wrong.
In response to the comments, I added another test to see what happens when the implicit val c: C is removed. In that case the compiler doesn't complain and picks implicit val b: B even though the caller asked for an instance of A.
object Thing {
  trait A { def name = 'A' }
  trait B extends A { override def name = 'B' }
  trait C extends A { override def name = 'C' }
  def f(a: A): String = "A"
  def f(b: B): String = "B"
  implicit val a: A = new A {}
  implicit val b: B = new B {}
}
import Thing._
scala> f(new A{})
val res0: String = A
scala> implicitly[A].name
val res3: Char = B
So, the overloading resolution of implicits differs from function calls more than I expected.
Anyway, I still can't find a reason why the designers of Scala decided to apply different resolution logic to function and implicit overloading. (Edit: I later noticed why.)
Let's see what happens in a real-world example.
Suppose we are writing a JSON parser that converts a JSON string directly to Scala abstract data types, and we want it to support many standard collections.
The snippet in charge of parsing the iterable collections would be something like this:
trait Parser[+A] {
  def parse(input: Input): ParseResult
  ///// many combinators here
}

implicit def summonParser[T](implicit parserT: Parser[T]) = parserT

/** @tparam IC iterator type constructor
  * @tparam E element's type */
implicit def iterableParser[IC[E] <: Iterable[E], E](
  implicit
  parserE: Parser[E],
  factory: IterableFactory[IC]
): Parser[IC[E]] =
  '[' ~> skipSpaces ~> (parserE <~ skipSpaces).repSepGen(coma <~ skipSpaces, factory.newBuilder[E]) <~ skipSpaces <~ ']'
This requires a Parser[E] for the elements and an IterableFactory[IC] to construct the collection specified by the type parameters.
So we have to put an instance of IterableFactory into implicit scope for every collection type we want to support.
implicit val iterableFactory: IterableFactory[Iterable] = Iterable
implicit val setFactory: IterableFactory[Set] = Set
implicit val listFactory: IterableFactory[List] = List
With the current implicit resolution logic implemented by the scala compiler, this snippet works fine for Set and List, but not for Iterable.
scala> def parserInt: Parser[Int] = ???
def parserInt: read.Parser[Int]
scala> Parser[List[Int]]
val res0: read.Parser[List[Int]] = read.Parser$$anonfun$pursue$3#3958db82
scala> Parser[Vector[Int]]
val res1: read.Parser[Vector[Int]] = read.Parser$$anonfun$pursue$3#648f48d3
scala> Parser[Iterable[Int]]
^
error: could not find implicit value for parameter parserT: read.Parser[Iterable[Int]]
And the reason is:
scala> implicitly[IterableFactory[Iterable]]
^
error: ambiguous implicit values:
both value listFactory in object IterableParser of type scala.collection.IterableFactory[List]
and value vectorFactory in object IterableParser of type scala.collection.IterableFactory[Vector]
match expected type scala.collection.IterableFactory[Iterable]
On the contrary, if the overloading resolution logic of implicits was like the one for function calls, this would work fine.
Edit 3: After many many coffees I noticed that, contrary to what I said above, there is no difference between the way the compiler decides which overloaded functions to call and which overloaded implicit to pick.
In the case of a function call: from all the function overloads such that the type of the argument is assignable to the type of the parameter, the compiler chooses the one whose parameter type is assignable to all the others. If no function satisfies that, a compilation error is thrown.
In the case of an implicit pick: from all the implicits in scope such that the type of the implicit is assignable to the asked-for type, the compiler chooses the one whose declared type is assignable to all the others. If no implicit satisfies that, a compilation error is thrown.
My mistake was that I didn't notice the inversion of the assignability.
Anyway, the resolution logic I proposed above (give me what I asked for) is not entirely wrong. It solves the particular case I mentioned. But for most use cases the logic implemented by the Scala compiler (and, I suppose, by all the other languages that support type classes) is better.
As explained in the Edit 3 section of the question, there are similarities between the way the compiler decides which overloaded function to call and which overloaded implicit to pick. In both cases the compiler does two steps:
1. Filters out all the alternatives that are not assignable.
2. From the remaining alternatives, chooses the most specific one, or complains if more than one remains.
In the case of the function call, the most specific alternative is the function with the most specific parameter type; in the case of the implicit pick, it is the instance with the most specific declared type.
But if the logic in both cases were exactly the same, why did the example in the question give different results? Because there is a difference: the assignability requirements that determine which alternatives pass the first step are opposite.
In the case of the function call, the alternatives that remain after the first step are the functions whose parameter type is more generic than the argument type; in the case of the implicit pick, the ones that remain are the instances whose declared type is more specific than the asked-for type.
That is enough to answer the question itself, but it doesn't give a solution to the problem that motivated it: how do you force the compiler to pick the implicit instance whose declared type is exactly the same as the summoned type? The answer is: wrap the implicit instances inside an invariant (non-variant) wrapper.
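A minimal sketch of that wrapper idea (the wrapper name Exact and the val names below are mine, not from the original code):
import scala.collection.IterableFactory
// Exact is invariant in A, unlike IterableFactory, which is covariant in its type constructor.
final case class Exact[A](value: A)
implicit val iterableFactoryE: Exact[IterableFactory[Iterable]] = Exact(Iterable)
implicit val listFactoryE: Exact[IterableFactory[List]] = Exact(List)
implicit val setFactoryE: Exact[IterableFactory[Set]] = Exact(Set)
// Only the instance whose declared type matches exactly is eligible, so this resolves unambiguously:
val fact: IterableFactory[Iterable] = implicitly[Exact[IterableFactory[Iterable]]].value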

Scala recursive type and type constructor implementation

I have a situation where I need a method that can take in types:
Array[Int]
Array[Array[Int]]
Array[Array[Array[Int]]]
Array[Array[Array[Array[Int]]]]
etc...
let's call this type RAI for "recursive array of ints"
def make(rai: RAI): ArrayPrinter = { ArrayPrinter(rai) }
Where ArrayPrinter is a class that is initialized with an RAI and iterates through the entire rai (let's say it prints all the values in this Array[Array[Int]])
val arrayOfArray: Array[Array[Int]] = Array(Array(1, 2), Array(3, 4))
val printer: ArrayPrinter[Array[Array[Int]]] = make(arrayOfArray)
printer.print_! // prints "1, 2, 3, 4"
It can also return the original Array[Array[Int]] without losing any type information.
val arr: Array[Array[Int]] = printer.getNestedArray()
How do you implement this in Scala?
Let's first focus on types. According to your definition, a type T should typecheck as an argument for ArrayPrinter if it is accepted by the following type function:
def accept[T]: Boolean =
  T match { // That's everyday business in Agda
    case Array[Int] => true
    case Array[X] => accept[X]
    case _ => false
  }
In Scala, you can encode that type function using implicit resolution:
trait RAI[T]

object RAI {
  implicit val e0: RAI[Array[Int]] = null
  implicit def e1[T](implicit i: RAI[T]): RAI[Array[T]] = null
}

case class ArrayPrinter[T: RAI](getNestedArray: T) // Only compiles if T is an RAI
To print things, the simplest solution is to treat the rai: T as a rai: Any:
def print_!: Unit = {
  def print0(a: Any): Unit = a match {
    case a: Int => println(a)
    case a: Array[_] => a.foreach(print0)
    case _ => ???
  }
  print0(getNestedArray) // actually start the recursion on the wrapped value
}
You could also be fancy and write print_! using type classes, but that would probably be less efficient and take more time to write than the above... Left as an exercise for the reader ;-)
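A quick check of the constraint, using the definitions above:
val ok = ArrayPrinter(Array(Array(1, 2), Array(3, 4))) // compiles: RAI[Array[Array[Int]]] is derived via e1(e0)
// val bad = ArrayPrinter("hello")                     // does not compile: there is no RAI[String] instance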
The way this is typically done is by defining an abstract class that contains all the functionality that you would want related to this recursive type, but does not actually take any constructor arguments. Rather, all of its methods take (at least one of) the type as an argument. The canonical example would be Ordering. Define one or more implicit implementations of this class, and then any time you need to use it, accept it as an implicit parameter. The corresponding example would be List's sorted method.
In your case, this might look like:
abstract class ArrayPrinter[A] {
  def mkString(a: A): String
}

implicit object BaseArrayPrinter extends ArrayPrinter[Int] {
  override def mkString(x: Int) = x.toString
}

class WrappedArrayPrinter[A](wrapped: ArrayPrinter[A]) extends ArrayPrinter[Array[A]] {
  override def mkString(xs: Array[A]) = xs.map(wrapped.mkString).mkString(", ")
}

implicit def makeWrappedAP[A](implicit wrapped: ArrayPrinter[A]): ArrayPrinter[Array[A]] = new WrappedArrayPrinter(wrapped)

def printHello[A](xs: A)(implicit printer: ArrayPrinter[A]): Unit = {
  println("hello, array: " + printer.mkString(xs))
}
This tends to be a bit cleaner than having that RAIOps class (or ArrayPrinter) take in an object as part of its constructor. That usually leads to more "boxing" and "unboxing", complicated type signatures, strange pattern matching, etc.
It also has the added benefit of being easier to extend. If later someone else has a reason to want an implementation of ArrayPrinter for a Set[Int], they can define it locally to their code. I have many times defined a custom Ordering.
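For instance, such a locally defined instance might look like this (a sketch; the Set[Int] printer is hypothetical):
implicit object SetIntPrinter extends ArrayPrinter[Set[Int]] {
  override def mkString(xs: Set[Int]) = xs.mkString(", ")
}

printHello(Array(Set(1, 2), Set(3))) // resolved via makeWrappedAP(SetIntPrinter)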

Type parameters cannot be referred to in a function body in Scala?

I came from C++ world and new to Scala, and this behavior looks unusual.
class G1[A](val a: A) {
  //val c: A = new A //This gives compile error
  def fcn1(b: A): Unit = {
    //val aobj = new A // This gives compile error
    println(a.getClass.getSimpleName)
    println(b.getClass.getSimpleName)
  }
}

def fcnWithTP[A](): Unit = {
  //val a = new A // This gives compile error
  //println(a.getClass.getSimpleName)
}
I am not able to create an object using the type parameter, either in a function body or in a class body. I am only able to use it as a function parameter type.
What is the reason for this? Is this because of type erasure? At run time, the function does not know what the actual type A is, so it cannot create an object of that type?
What is the general rule for this? Does it mean the type parameter cannot appear in a function body or class definition at all? If it can appear, what are some examples?
Yes, you're right that this is because of erasure—you don't know anything about A at runtime that you haven't explicitly asserted about it as a constraint in the method signature.
Type erasure on the JVM is only partial, so you can do some horrible things in Scala like ask for the class of a value:
scala> List(1, 2, 3).getClass
res0: Class[_ <: List[Int]] = class scala.collection.immutable.$colon$colon
Once you get to generics, though, everything is erased, so for example you can't tell the following things apart:
scala> List(1, 2, 3).getClass == List("a", "b", "c").getClass
res1: Boolean = true
(In case it's not clear, I think type erasure is unambiguously a good thing, and that the only problem with type erasure on the JVM is that it's not more complete.)
You can write the following:
import scala.reflect.{ ClassTag, classTag }
class G1[A: ClassTag](val a: A) {
  val c: A = classTag[A].runtimeClass.newInstance().asInstanceOf[A]
}
And use it like this:
scala> val stringG1: G1[String] = new G1("foo")
stringG1: G1[String] = G1#33d71170
scala> stringG1.c
res2: String = ""
This is a really bad idea, though, since it will crash at runtime for many, many type parameters:
scala> class Foo(i: Int)
defined class Foo
scala> val fooG1: G1[Foo] = new G1(new Foo(0))
java.lang.InstantiationException: Foo
at java.lang.Class.newInstance(Class.java:427)
... 43 elided
Caused by: java.lang.NoSuchMethodException: Foo.<init>()
at java.lang.Class.getConstructor0(Class.java:3082)
at java.lang.Class.newInstance(Class.java:412)
... 43 more
A better approach is to pass in the constructor:
class G1[A](val a: A)(empty: () => A) {
  val c: A = empty()
}
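Usage then just means supplying the constructor explicitly, for example:
val g = new G1("foo")(() => "") // g.c is the "" produced by the factory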
And a much better approach is to use a type class:
trait Empty[A] {
  def default: A
}

object Empty {
  def instance[A](a: => A): Empty[A] = new Empty[A] {
    def default: A = a
  }

  implicit val stringEmpty: Empty[String] = instance("")
  implicit val fooEmpty: Empty[Foo] = instance(new Foo(0))
}

class G1[A: Empty](val a: A) {
  val c: A = implicitly[Empty[A]].default
}
And then:
scala> val fooG1: G1[Foo] = new G1(new Foo(10101))
fooG1: G1[Foo] = G1#5a34b5bc
scala> fooG1.c
res0: Foo = Foo#571ccdd0
Here we're referring to A in the definition of G1, but we're only making reference to properties and operations that we've confirmed hold or are available at compile time.
Generics are not the same thing as templates. In C++ Foo<Bar> and Foo<Bat> are two different classes, generated at compile time.
In Scala or Java, Foo[T] is a single class with a type parameter. Consider this:
class Foo(val bar: Int)

class Bar[T] {
  val foo = new T // if this was possible ...
}

new Bar[Foo]
In C++, (an equivalent of) this would fail to compile, because there is no accessible constructor of Foo that takes no arguments. The compiler would know that when it tried to instantiate a template for Bar<Foo> class, and fail.
In scala, there is no separate class for Bar[Foo], so, at compilation time, the compiler doesn't know anything about T, other than that it is some type. It has no way of knowing whether calling a constructor (or any other method for that matter) is possible or sensible (you can't instantiate a trait for example, or an abstract class), so new T in that context has to fail: it simply does not make sense.
Roughly speaking, you can use type parameters in places where any type can be used (to declare a return type, for example, or a variable), but when you are trying to do something that only works for some types and not for others, you have to make your type parameter more specific. For example, this: def foo[T](t: T) = t.intValue does not work, but this: def foo[T <: Number](t: T) = t.intValue does.
Well, the compiler does not know how to create an instance of type A. You need to either provide a factory function that returns an instance of A, or use a Manifest, which creates an instance of A via reflection.
With factory function:
class G1[A](val a: A)(f: () => A) {
  val c: A = f()
}
With Manifest:
class G1[A](val a: A)(implicit m: scala.reflect.Manifest[A]) {
  val c: A = m.erasure.newInstance.asInstanceOf[A]
}
When using a type parameter, you will usually specify more details about the type A, unless you're implementing some sort of container for A that does not directly interact with A. If you need to interact with A, you need some specification of it. For example, you can say A must be a subclass of B:
class G1[A <: B](val a: A)
Now the compiler knows A is a subclass of B, so you can call all functions defined in B on a: A.
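A small sketch of that (B and its greet method are made up here):
trait B {
  def greet(): String = "hello"
}

class G1[A <: B](val a: A) {
  def greeting: String = a.greet() // allowed: the bound guarantees every A has B's members
}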

Scala - How can I exclude my function's generic type until use?

I have a map of String to Functions which details all of the valid functions that are in a language. When I add a function to my map, I am required to specify the type (in this case Int).
var functionMap: Map[String, (Nothing) => Any] = Map[String, (Nothing) => Any]()
functionMap += ("Neg" -> expr_neg[Int])

def expr_neg[T: Numeric](value: T)(implicit n: Numeric[T]): T = {
  n.negate(value)
}
Instead, how can I do something like:
functionMap += ("Neg" -> expr_neg)
without the [Int] and add it in later on when I call:
(unaryFunctionMap.get("abs").get)[Int](-45)
You're trying to build your function using type classes (in this case, Numeric). Type classes rely on implicit parameters. Implicits are resolved at compile time. Your function name string values are only known at runtime, therefore you shouldn't build your solution on top of type classes like this.
An alternative would be to store a separate function object in your map for each parameter type. You could store the parameter type with a TypeTag:
import scala.reflect.runtime.universe._
var functionMap: Map[(String, TypeTag[_]), (Nothing) => Any] = Map()
def addFn[T: TypeTag](name: String, f: T => Any) =
  functionMap += ((name, typeTag[T]) -> f)
def callFn[T: TypeTag](name: String, value: T): Any =
  functionMap((name, typeTag[T])).asInstanceOf[T => Any](value)
addFn[Int]("Neg", expr_neg)
addFn[Long]("Neg", expr_neg)
addFn[Double]("Neg", expr_neg)
val neg10 = callFn("Neg", 10)
No type class implicit needs to be resolved to call callFn(), because the implicit Numeric was already resolved on the call to addFn.
What happens if we try to resolve the type class when the function is called?
The first problem is that a Function1 (or Function2) can't have implicit parameters. Only a method can. (See this other question for more explanation.) So if you want something that acts like a Function1 but takes an implicit parameter, you'll need to create your own type that defines the apply() method. It has to be a different type from Function1, though.
Now we get to the main problem: all implicits must be able to be resolved at compile time. At the location in code where the method is run, all the type information needed to choose the implicit value needs to be available. In the following code example:
unaryFunctionMap("abs")(-45)
We don't really need to specify that our value type is Int, because it can be inferred from the value -45 itself. But the fact that our method uses a Numeric implicit value can't be inferred from anything in that line of code. We need to specify the use of Numeric somewhere at compile time.
If you can have a separate map for unary functions that take a numeric value, this is (relatively) easy:
trait UnaryNumericFn {
  def apply[T](value: T)(implicit n: Numeric[T]): Any
}

var unaryNumericFnMap: Map[String, UnaryNumericFn] = Map()

object expr_neg extends UnaryNumericFn {
  override def apply[T](value: T)(implicit n: Numeric[T]): T = n.negate(value)
}

unaryNumericFnMap += ("Neg" -> expr_neg)
val neg3 = unaryNumericFnMap("Neg")(3)
You can make the function trait generic on the type class it requires, letting your map hold unary functions that use different type classes. This requires a cast internally, and moves the specification of Numeric to where the function is finally called:
trait UnaryFn[-E[X]] {
  def apply[T](value: T)(implicit ev: E[T]): Any
}

object expr_neg extends UnaryFn[Numeric] {
  override def apply[T](value: T)(implicit n: Numeric[T]): T = n.negate(value)
}

var privateMap: Map[String, UnaryFn[Nothing]] = Map()

def putUnary[E[X]](key: String, value: UnaryFn[E]): Unit =
  privateMap += (key -> value)

def getUnary[E[X]](key: String): UnaryFn[E] =
  privateMap(key).asInstanceOf[UnaryFn[E]]

putUnary("Neg", expr_neg)
val pos5 = getUnary[Numeric]("Neg")(-5)
But you still have to specify Numeric somewhere.
Also, neither of these solutions, as written, support functions that don't need type classes. Being forced to be this explicit about which functions take implicit parameters, and what kinds of implicits they use, starts to defeat the purpose of using implicits in the first place.
You can't. Because expr_neg is a method with a type parameter T and an implicit argument n depending on that parameter. For Scala to lift that method to a function, it needs to capture the implicit, and therefore it must know what kind of type you want.
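Concretely, in terms of the question's own functionMap:
functionMap += ("Neg" -> expr_neg[Int]) // OK: Numeric[Int] is resolved and captured here
// functionMap += ("Neg" -> expr_neg)   // does not compile: T, and therefore its Numeric, is unknown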

How to use an implicit at runtime?

First, this is more for experimentation and learning at this point and I know that I can just pass the parameter in directly.
def eval(xs: List[Int], message: => String) = {
  xs.foreach { x =>
    implicit val z = x
    println(message)
  }
}

def test()(implicit x: Int) = {
  if (x == 1) "1" else "2"
}

eval(List(1, 2), test) // error: could not find implicit value for parameter x
Is this even possible, and am I just not using implicits properly for the situation? Or is it not possible at all?
Implicit parameters are resolved at compile time. A by-name parameter captures the values it accesses in the scope where it is passed in.
At runtime, there is no concept of implicits.
eval(List(1, 2), test)
This needs to be fully resolved at compile time. The Scala compiler has to figure out all the parameters it needs to call test. It will try to find an implicit Int value in the scope where eval is called. In your case, the implicit value defined inside eval won't have any effect on that resolution.
How to get an implicit value is always resolved at compile time. There's no such thing as a Function object with an implicit parameter. To get a callable object from a method with implicit parameters, you need to make them explicit. If you really wanted to, you could then wrap that in another method that uses implicits:
def eval(xs: List[Int], message: Int => String) = {
  def msg(implicit x: Int) = message(x)
  xs.foreach { x =>
    implicit val z = x
    println(msg)
  }
}

eval(List(1, 2), test()(_))
You probably won't gain anything by doing that.
Implicits aren't an alternative to passing in parameters. They're an alternative to explicitly typing in the parameters that you're passing in. The actual passing in, however, works the same way.
I assume that you want the implicit parameter x (in test's signature) to be filled by the implicit variable z (in eval).
In this case, z is outside the scope within which x can see it. Implicit resolution is done statically by the compiler, so runtime data flow never affects it. To learn more about scope, "Where do Implicits Come From?" in this answer is helpful.
But you can still use an implicit for that purpose, like this. (I think it is a misuse of implicits, so this is only for demonstration.)
var z = 0
implicit def zz: Int = z

def eval(xs: List[Int], message: => String) = {
  xs.foreach { x =>
    z = x
    println(message)
  }
}

def test()(implicit x: Int) = {
  if (x == 1) "1" else "2"
}

eval(List(1, 2), test)
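This prints 1 and then 2: the by-name argument test expands to test()(zz) at the call site, where the implicit def zz is found at compile time, and each evaluation of message calls zz, which reads the current value of the mutable z that eval has just set.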