How to use an implicit at runtime? - scala

First, this is more for experimentation and learning at this point and I know that I can just pass the parameter in directly.
def eval(xs: List[Int], message: => String) = {
  xs.foreach { x =>
    implicit val z = x
    println(message)
  }
}
def test()(implicit x: Int) = {
  if (x == 1) "1" else "2"
}

eval(List(1, 2), test) // error: could not find implicit value for parameter x
Is this even possible and I am just not using implicits properly for the situation? Or is it not possible at all?

Implicit parameters are resolved at compile time. A by-name parameter captures the values it accesses in the scope where it is passed in.
At runtime, there is no notion of an implicit.
eval(List(1, 2), test)
This needs to be fully resolved at compile time: the Scala compiler has to figure out all the arguments needed to call test. It looks for an implicit Int value in the scope where eval is called. The implicit value defined inside eval has no effect on that resolution.
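To see this concretely, here is a small runnable sketch (the names fixed and out are mine, not from the question): the implicit that test needs must be in scope at the line where eval is called, and it is fixed there once, so the loop variable never influences it.

```scala
def test()(implicit x: Int): String = if (x == 1) "1" else "2"

// Collect results instead of printing, so the behaviour is easy to check.
def eval(xs: List[Int], message: => String): List[String] =
  xs.map(_ => message)

implicit val fixed: Int = 1        // resolved here, at compile time
val out = eval(List(1, 2), test()) // List("1", "1"): x is never 2 inside test
```

Each evaluation of the by-name message calls test()(fixed); the implicit val z inside the loop plays no part.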

Which implicit value to use is always resolved at compile time. There's no such thing as a Function object with an implicit parameter; to get a callable object from a method with implicit parameters, you need to make them explicit. If you really wanted to, you could then wrap that in another method that uses implicits:
def eval(xs: List[Int], message: Int => String) = {
  def msg(implicit x: Int) = message(x)
  xs.foreach { x =>
    implicit val z = x
    println(msg)
  }
}

eval(List(1, 2), test()(_))
You probably won't gain anything by doing that.
Implicits aren't an alternative to passing in parameters. They're an alternative to explicitly typing in the parameters that you're passing in. The actual passing in, however, works the same way.
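As an illustration of that last point (Punct and greet are hypothetical names, not from the question), the implicit call and the explicit call below pass exactly the same argument:

```scala
case class Punct(s: String)

def greet(name: String)(implicit p: Punct): String = s"Hello, $name${p.s}"

implicit val bang: Punct = Punct("!")

val a = greet("world")             // the compiler fills in `bang`
val b = greet("world")(Punct("!")) // the same argument, typed out explicitly
```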

I assume that you want the implicit parameter x (in test's signature) to be filled by the implicit value z (in eval).
In this case, z is not in a scope that x can see. Implicit resolution is done statically by the compiler, so runtime data flow never affects it. To learn more about scoping, the section Where do Implicits Come From? in this answer is helpful.
But you can still use an implicit for that purpose, like this (I think it is a misuse of implicits, so this is only a demonstration):
var z = 0
implicit def zz: Int = z

def eval(xs: List[Int], message: => String) = {
  xs.foreach { x =>
    z = x
    println(message)
  }
}

def test()(implicit x: Int) = {
  if (x == 1) "1" else "2"
}

eval(List(1, 2), test)
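A sketch of why this workaround behaves differently from the original: zz is an implicit def, so it re-reads the var z every time the by-name message is evaluated. Collecting the results instead of printing (my change, purely for checkability) shows the per-element values coming through:

```scala
var z = 0
implicit def zz: Int = z // re-evaluated on every access

def test()(implicit x: Int): String = if (x == 1) "1" else "2"

def eval(xs: List[Int], message: => String): List[String] =
  xs.map { x => z = x; message }

val out = eval(List(1, 2), test()) // List("1", "2")
```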


When can parentheses be safely omitted in Scala?

Here is a toy example:
object Example {
  import collection.mutable
  import collection.immutable.TreeSet

  val x = TreeSet(1, 5, 8, 12)
  val y = mutable.Set.empty ++= x
  val z = TreeSet.empty ++ y
  // This gives an error: unspecified parameter
  // val z = TreeSet.empty() ++ y
}
Apparently TreeSet.empty and TreeSet.empty() are not the same thing. What's going on under the hood? When can I safely omit the parentheses (or, in this case, not omit them)?
Update
I had sent some code to the console in IntelliJ and deleted it before evaluating the above code; here it is:
implicit object StringOrdering extends Ordering[String] {
  def compare(o1: String, o2: String) = {
    o1.length - o2.length
  }
}

object StringOrdering1 extends Ordering[String] {
  def compare(o1: String, o2: String) = {
    o2.length - o1.length
  }
}
This is a special case, and isn't quite relevant to when you can and cannot omit parentheses.
This is the signature for TreeSet.empty:
def empty[A](implicit ordering: Ordering[A]): TreeSet[A]
It has an implicit parameter list that requires an Ordering for the contained type A. When you call TreeSet.empty, the compiler will try to implicitly find the correct Ordering[A].
But when you call TreeSet.empty(), the compiler thinks you are trying to provide the implicit parameter explicitly, yet the argument list is empty, which is a compile error (wrong number of arguments). The only way this form can work is if you explicitly pass some Ordering: TreeSet.empty(Ordering.Int).
Side note: Your above code does not actually compile with TreeSet.empty, because it succumbs to an ambiguous implicit error for Ordering. There is probably some implicit Ordering[Int] in your scope that you are not including in the question. It would be better to make the type explicit and use TreeSet.empty[Int].
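A runnable sketch of the three spellings discussed above (using Int so the Ordering is unambiguous):

```scala
import scala.collection.immutable.TreeSet

val a = TreeSet.empty[Int]          // Ordering[Int] found implicitly
val b = TreeSet.empty(Ordering.Int) // the same Ordering, passed explicitly
// TreeSet.empty[Int]()             // won't compile: an empty argument list
//                                  // where an Ordering is expected

val sorted = (a ++ List(3, 1, 2)).toList // List(1, 2, 3)
```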

Magnet pattern and overloaded methods

There is a significant difference in how Scala resolves implicit conversions from "Magnet Pattern" for non-overloaded and overloaded methods.
Suppose there is a trait Apply (a variation of a "Magnet Pattern") implemented as follows.
trait Apply[A] {
  def apply(): A
}

object Apply {
  implicit def fromLazyVal[A](v: => A): Apply[A] = new Apply[A] {
    def apply(): A = v
  }
}
Now we create a trait Foo that has a single apply taking an instance of Apply, so we can pass it any value of an arbitrary type A, since there is an implicit conversion A => Apply[A].
trait Foo[A] {
  def apply(a: Apply[A]): A = a()
}
We can make sure it works as expected using REPL and this workaround to de-sugar Scala code.
scala> val foo = new Foo[String]{}
foo: Foo[String] = $anon$1#3a248e6a
scala> showCode(reify { foo { "foo" } }.tree)
res9: String =
$line21$read.foo.apply(
$read.INSTANCE.Apply.fromLazyVal("foo")
)
This works great, but suppose we pass a complex expression (with ;) to the apply method.
scala> val foo = new Foo[Int]{}
foo: Foo[Int] = $anon$1#5645b124
scala> var i = 0
i: Int = 0
scala> showCode(reify { foo { i = i + 1; i } }.tree)
res10: String =
$line23$read.foo.apply({
$line24$read.`i_=`($line24$read.i.+(1));
$read.INSTANCE.Apply.fromLazyVal($line24$read.i)
})
As we can see, the implicit conversion has been applied only to the last part of the complex expression (i.e., i), not to the whole expression. So i = i + 1 was evaluated strictly at the moment we passed it to the apply method, which is not what we were expecting.
Good (or bad) news: we can make scalac use the whole expression in the implicit conversion, so that i = i + 1 is evaluated lazily, as expected. To do so, we (surprise, surprise!) add an overloaded method Foo.apply that takes some type other than Apply.
trait Foo[A] {
  def apply(a: Apply[A]): A = a()
  def apply(s: Symbol): Foo[A] = this
}
And then.
scala> var i = 0
i: Int = 0
scala> val foo = new Foo[Int]{}
foo: Foo[Int] = $anon$1#3ff00018
scala> showCode(reify { foo { i = i + 1; i } }.tree)
res11: String =
$line28$read.foo.apply($read.INSTANCE.Apply.fromLazyVal({
$line27$read.`i_=`($line27$read.i.+(1));
$line27$read.i
}))
As we can see, the entire expression i = i + 1; i ended up inside the implicit conversion, as expected.
So my question is: why? Why does the extent of the expression to which an implicit conversion is applied depend on whether or not there is an overloaded method in the class?
Now, that is a tricky one. And it's actually pretty awesome, I didn't know that "workaround" to the "lazy implicit does not cover full block" problem. Thanks for that!
What happens is related to expected types, and how they affect type inference, implicit conversions, and overloads.
Type inference and expected types
First, we have to know that type inference in Scala is bi-directional. Most of the inference works bottom-up (given a: Int and b: Int, infer a + b: Int), but some things are top-down. For example, inferring the parameter types of a lambda is top-down:
def foo(f: Int => Int): Int = f(42)
foo(x => x + 1)
In the second line, after resolving foo to be def foo(f: Int => Int): Int, the type inferencer can tell that x must be of type Int. It does so before typechecking the lambda itself. It propagates type information from the function application down to the lambda, which is a parameter.
Top-down inference basically relies on the notion of expected type. When typechecking a node of the AST of the program, the typechecker does not start empty-handed: it receives an expected type from "above" (in this case, the function application node). When typechecking the lambda x => x + 1 in the above example, the expected type is Int => Int, because we know what parameter type foo expects. This drives the type inference into inferring Int for the parameter x, which in turn allows x + 1 to be typechecked.
Expected types are propagated down certain constructs, e.g., blocks ({}) and the branches of ifs and matches. Hence, you could also call foo with
foo({
  val y = 1
  x => x + y
})
and the typechecker is still able to infer x: Int. That is because, when typechecking the block { ... }, the expected type Int => Int is passed down to the typechecking of the last expression, i.e., x => x + y.
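Both calls above can be checked directly; in each case the expected type Int => Int reaches the lambda, so x needs no annotation:

```scala
def foo(f: Int => Int): Int = f(42)

val r1 = foo(x => x + 1) // x inferred as Int from foo's parameter type

val r2 = foo({
  val y = 1
  x => x + y // the expected type reached this last expression of the block
})
```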
Implicit conversions and expected types
Now, we have to introduce implicit conversions into the mix. When typechecking a node produces a value of type T, but the expected type for that node is U where T <: U is false, the typechecker looks for an implicit T => U (I'm probably simplifying things a bit here, but the gist is still true). This is why your first example does not work. Let us look at it closely:
trait Foo[A] {
  def apply(a: Apply[A]): A = a()
}

val foo = new Foo[Int] {}

foo({
  i = i + 1
  i
})
When calling foo.apply, the expected type for the parameter (i.e., the block) is Apply[Int] (A has already been instantiated to Int). We can "write" this typechecker "state" like this:
{
  i = i + 1
  i
}: Apply[Int]
This expected type is passed down to the last expression of the block, which gives:
{
  i = i + 1
  (i: Apply[Int])
}
At this point, since i: Int and the expected type is Apply[Int], the typechecker finds the implicit conversion:
{
  i = i + 1
  fromLazyVal[Int](i)
}
which causes only i to be lazified.
Overloads and expected types
OK, time to throw overloads in there! When the typechecker sees an application of an overloaded method, it has much more trouble deciding on an expected type. We can see that with the following example:
object Foo {
  def apply(f: Int => Int): Int = f(42)
  def apply(f: String => String): String = f("hello")
}

Foo(x => x + 1)
gives:
error: missing parameter type
Foo(x => x + 1)
^
In this case, the failure of the typechecker to figure out an expected type causes the parameter type not to be inferred.
If we take your "solution" to your issue, we have a different consequence:
trait Foo[A] {
  def apply(a: Apply[A]): A = a()
  def apply(s: Symbol): Foo[A] = this
}

val foo = new Foo[Int] {}

foo({
  i = i + 1
  i
})
Now, when typechecking the block, the typechecker has no expected type to work with. It will therefore typecheck the last expression without an expected type, and eventually typecheck the whole block as an Int:
{
  i = i + 1
  i
}: Int
Only now, with an already typechecked argument, does it try to resolve the overloads. Since none of the overloads conforms directly, it tries to apply an implicit conversion from Int to either Apply[Int] or Symbol. It finds fromLazyVal[Int], which it applies to the entire argument. It does not push it inside the block anymore, giving:
fromLazyVal({
  i = i + 1
  i
}): Apply[Int]
In this case, the whole block is lazified.
This concludes the explanation. To summarize, the major difference is the presence vs absence of an expected type when typechecking the block. With an expected type, the implicit conversion is pushed down as much as possible, down to just i. Without the expected type, the implicit conversion is applied a posteriori on the entire argument, i.e., the whole block.
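The two situations can also be observed without showCode, by watching when the side effect runs (a sketch; the counters m and n are mine, and val definitions stand in for the method argument). With an expected type, the assignment runs eagerly at the definition site; with the conversion applied by hand to the whole block, nothing runs until the Apply is forced:

```scala
import scala.language.implicitConversions

trait Apply[A] { def apply(): A }

object Apply {
  implicit def fromLazyVal[A](v: => A): Apply[A] = new Apply[A] {
    def apply(): A = v
  }
}

var m = 0
// Expected type Apply[Int] is pushed into the block: the conversion wraps
// only the trailing `m`, so the assignment has already run at this point.
val eager: Apply[Int] = { m = m + 1; m }
val mAfterEager = m // 1, even though eager() was never called

var n = 0
// Applying the conversion to the whole block by hand defers everything.
val deferred: Apply[Int] = Apply.fromLazyVal { n = n + 1; n }
val nAfterDeferred = n  // still 0
val first = deferred()  // runs the block: 1
val second = deferred() // by-name, so it runs again: 2
```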

Scala - How can I exclude my function's generic type until use?

I have a map from String to functions which lists all of the valid functions in a language. When I add a function to my map, I am required to specify the type (in this case Int).
var functionMap: Map[String, (Nothing) => Any] = Map[String, (Nothing) => Any]()
functionMap += ("Neg" -> expr_neg[Int])

def expr_neg[T](value: T)(implicit n: Numeric[T]): T = {
  n.negate(value)
}
Instead, how can I do something like:
functionMap += ("Neg" -> expr_neg)
without the [Int] and add it in later on when I call:
(unaryFunctionMap.get("abs").get)[Int](-45)
You're trying to build your function using type classes (in this case, Numeric). Type classes rely on implicit parameters. Implicits are resolved at compile time. Your function name string values are only known at runtime, therefore you shouldn't build your solution on top of type classes like this.
An alternative would be to store a separate function object in your map for each parameter type. You could store the parameter type with a TypeTag:
import scala.reflect.runtime.universe._

var functionMap: Map[(String, TypeTag[_]), (Nothing) => Any] = Map()

def addFn[T: TypeTag](name: String, f: T => Any) =
  functionMap += ((name, typeTag[T]) -> f)

def callFn[T: TypeTag](name: String, value: T): Any =
  functionMap((name, typeTag[T])).asInstanceOf[T => Any](value)

addFn[Int]("Neg", expr_neg)
addFn[Long]("Neg", expr_neg)
addFn[Double]("Neg", expr_neg)

val neg10 = callFn("Neg", 10)
No type class implicit needs to be resolved to call callFn(), because the implicit Numeric was already resolved on the call to addFn.
What happens if we try to resolve the type class when the function is called?
The first problem is that a Function1 (or Function2) can't have implicit parameters. Only a method can. (See this other question for more explanation.) So if you want something that acts like a Function1 but takes an implicit parameter, you'll need to create your own type that defines the apply() method. It has to be a different type from Function1, though.
Now we get to the main problem: all implicits must be able to be resolved at compile time. At the location in code where the method is run, all the type information needed to choose the implicit value needs to be available. In the following code example:
unaryFunctionMap("abs")(-45)
We don't really need to specify that our value type is Int, because it can be inferred from the value -45 itself. But the fact that our method uses a Numeric implicit value can't be inferred from anything in that line of code. We need to specify the use of Numeric somewhere at compile time.
If you can have a separate map for unary functions that take a numeric value, this is (relatively) easy:
trait UnaryNumericFn {
  def apply[T](value: T)(implicit n: Numeric[T]): Any
}

var unaryNumericFnMap: Map[String, UnaryNumericFn] = Map()

object expr_neg extends UnaryNumericFn {
  override def apply[T](value: T)(implicit n: Numeric[T]): T = n.negate(value)
}

unaryNumericFnMap += ("Neg" -> expr_neg)

val neg3 = unaryNumericFnMap("Neg")(3)
You can make the function trait generic on the type class it requires, letting your map hold unary functions that use different type classes. This requires a cast internally, and moves the specification of Numeric to where the function is finally called:
trait UnaryFn[-E[X]] {
  def apply[T](value: T)(implicit ev: E[T]): Any
}

object expr_neg extends UnaryFn[Numeric] {
  override def apply[T](value: T)(implicit n: Numeric[T]): T = n.negate(value)
}

var privateMap: Map[String, UnaryFn[Nothing]] = Map()

def putUnary[E[X]](key: String, value: UnaryFn[E]): Unit =
  privateMap += (key -> value)

def getUnary[E[X]](key: String): UnaryFn[E] =
  privateMap(key).asInstanceOf[UnaryFn[E]]

putUnary("Neg", expr_neg)

val pos5 = getUnary[Numeric]("Neg")(-5)
But you still have to specify Numeric somewhere.
Also, neither of these solutions, as written, support functions that don't need type classes. Being forced to be this explicit about which functions take implicit parameters, and what kinds of implicits they use, starts to defeat the purpose of using implicits in the first place.
You can't, because expr_neg is a method with a type parameter T and an implicit argument n that depends on that parameter. For Scala to lift that method to a function, it needs to capture the implicit, and therefore it must know which type you want.
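A short sketch of that last point: eta-expansion succeeds once T is fixed, because only then can the Numeric instance be captured into the resulting function value.

```scala
def expr_neg[T](value: T)(implicit n: Numeric[T]): T = n.negate(value)

// val bad = expr_neg _            // won't compile: T (and its Numeric) unknown
val negInt: Int => Int = expr_neg[Int](_) // Numeric[Int] captured here
val r = negInt(45) // -45
```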

Scala and type bound with a given operation

Is it possible to define a parameter x in a method such that the type of x is a generic type T that implements a given function signature (say, def apply(): Double), without introducing a new type?
[Example] The goal is to define something like (I am using an adhoc syntax just for the sake of illustration):
def foo(x : T with def apply() : Double) = { ... }
Currently, I could introduce a new type ApplyDouble, but that would require me extending all possible types whose instances are legal parameters to 'foo', and foo's signature would then be turned into
def foo(x : ApplyDouble) = { ... }
Sure, it's possible with a structural type, and you've even almost got the syntax right:
def foo(x: { def apply(): Double }) = x.apply
And then:
scala> foo(() => 13.0)
res0: Double = 13.0
Or:
scala> foo(new { def apply() = 42.0 })
res1: Double = 42.0
The definition of foo will give you a warning about reflective access, which you can avoid by adding an import or compiler option (as described in the warning message).
Note that there is some overhead involved in calling methods on a structural type, so if you need this in a tight inner loop you may want to rethink your approach a bit. In most cases, though, it probably won't make a noticeable difference.

Test if implicit conversion is available

I am trying to detect if an implicit conversion exists, and depending on it, to execute some code. For instance :
if (x can-be-converted-to SomeType)
  return something(conversion(x))
else
  return someotherthing(x)
For instance, x is an Int and should be converted to a RichInt.
Is this possible in Scala? If yes, how?
Thanks
As others have already mentioned, implicits are resolved at compile time, so you should probably use type classes to solve problems like this. That way you also gain the ability to extend the functionality to other types later on.
Also, you can only require that an implicit value exists; there is no way to directly express the non-existence of an implicit value, except through default arguments.
Jean-Philippe's solution using a default argument is already quite good, but the null could be eliminated by defining a singleton that can be put in place of the implicit parameter. Make it private, because it is of no use in other code and can even be dangerous, since implicit conversions can happen implicitly.
private case object NoConversion extends (Any => Nothing) {
  def apply(x: Any) = sys.error("No conversion")
}

// Just for convenience, so NoConversion does not escape the scope.
private def noConversion: Any => Nothing = NoConversion

// And now some convenience methods that can be safely exposed:
def canConvert[A, B]()(implicit f: A => B = noConversion) =
  f ne NoConversion

def tryConvert[A, B](a: A)(implicit f: A => B = noConversion): Either[A, B] =
  if (f eq NoConversion) Left(a) else Right(f(a))

def optConvert[A, B](a: A)(implicit f: A => B = noConversion): Option[B] =
  if (f ne NoConversion) Some(f(a)) else None
You can try to pass it to a method that needs the corresponding implicit parameter with a default of null:
def toB[A](a: A)(implicit convertFct: A => B = null) =
  if (convertFct != null)
    convertFct(a)
  else
    someOtherThing(a)
Note that it looks curious to me to check this at runtime, because the compiler knows at compile time whether such a conversion function is available.