Scala: implicit conversion to generate method values?

If I have the following class in Scala:
class Simple {
  def doit(a: String): Int = 42
}
And an instance of that class
val o = new Simple()
Is it possible to define an implicit conversion that will morph this instance and a method known at compile time into a tuple like this?
Tuple2 (o, (_: Simple).doit _)
I was hoping I could come up with one for registering function callbacks in the spirit of:
doThisLater (o -> 'doit)
I already have my function callbacks working, based on retronym's answer to a previous SO question, but it would be great to add this thick layer of syntactic sugar.

You can just eta-expand the method. Sample REPL session,
scala> case class Deferred[T, R](f : T => R)
defined class Deferred
scala> def doThisLater[T, R](f : T => R) = Deferred(f)
doThisLater: [T, R](f: T => R)Deferred[T,R]
scala> val deferred = doThisLater(o.doit _) // eta-convert doit
deferred: Deferred[String,Int] = Deferred(<function1>)
scala> deferred.f("foo")
res0: Int = 42
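For completeness, here is a self-contained (non-REPL) sketch of the same approach. The Callbacks and Demo object names are mine, added only to make the snippet runnable:

class Simple {
  def doit(a: String): Int = 42
}

case class Deferred[T, R](f: T => R)

object Callbacks {
  // eta-expansion turns the method value `o.doit _` into a plain Function1
  def doThisLater[T, R](f: T => R): Deferred[T, R] = Deferred(f)
}

object Demo extends App {
  val o = new Simple
  val deferred = Callbacks.doThisLater(o.doit _) // the receiver `o` is captured in the closure
  println(deferred.f("foo")) // prints 42
}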

Related

How to compose monadic functions/methods of type Function1 via 'andThen' in Scala?

I'm trying to compose functions of type A => Either[_, B] with each other, but nothing seems to work. I tried extending Function1 with an implicit class like the one below, but I keep getting compilation errors:
import scala.language.implicitConversions
object MyExtensions {
  implicit class AndThenEither[A, B](val e: Function1[A, Either[_, B]]) {
    def andThen[C](f: Function1[B, Either[_, C]]): Function1[A, Either[_, C]] = {
      (v1: A) => e.apply(v1).flatMap(b => f.apply(b)) // type: A => Either[_, C]
    }
  }
}
However the below fails:
object Sample extends App {
  case class TenX(t: Int)
  case class DoubleX(x: Int)
  def composeFunction1Test(): Unit = {
    val func1: (Int) => Either[String, TenX] = (i: Int) => Right(TenX(10 * i))
    val func2: (TenX) => Either[String, DoubleX] = (t: TenX) => Right(DoubleX(t.t * 2))
    val pipeline = func1 andThen func2
    val result = pipeline(1) // Expected Right(DoubleX(20))
  }
}
This keeps complaining about a type mismatch and I can't figure out what's wrong. Also, if I try to use it with def-defined methods (instead of vals as above), it still fails to compile.
Question: How can I compose such Either-typed functions with an equivalent andThen operator/function, usable across any such function or class method, to allow for this kind of chaining?
Since andThen is already defined on Function1, implicit resolution will not kick in to resolve an extension method of the same name. Try renaming the extension method:
implicit class AndThenEither ... {
  def fooAndThen ...
}
func1 fooAndThen func2 // should work
Kleisli is indeed a way of composing functions that return monadic values.
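For reference, here is a minimal, self-contained sketch of the renamed extension method (assuming Scala 2.12+, where Either is right-biased and has flatMap; the left type is fixed to String here to avoid the existential wildcard):

object EitherCompose {
  implicit class AndThenEither[A, B](val e: A => Either[String, B]) extends AnyVal {
    def andThenE[C](f: B => Either[String, C]): A => Either[String, C] =
      a => e(a).flatMap(f)
  }
}

object Sample extends App {
  import EitherCompose._
  case class TenX(t: Int)
  case class DoubleX(x: Int)

  val func1: Int => Either[String, TenX] = i => Right(TenX(10 * i))
  val func2: TenX => Either[String, DoubleX] = t => Right(DoubleX(t.t * 2))

  val pipeline = func1 andThenE func2
  println(pipeline(1)) // Right(DoubleX(20))
}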

Scala call-by-name constructor parameter in implicit class

The following code does not compile. The goal is to have a call-by-name constructor parameter in an implicit class, as illustrated here:
def f(n: Int) = (1 to n) product
implicit class RichElapsed[A](val f: => A) extends AnyVal {
  def elapsed(): (A, Double) = {
    val start = System.nanoTime()
    val res = f
    val end = System.nanoTime()
    (res, (end - start) / 1e6)
  }
}
where a call
val (res, time) = f(3).elapsed
res: Int = 6
time: Double = 123.0
This error is reported in the REPL:
<console>:1: error: `val' parameters may not be call-by-name
implicit class RichElapsed[A](val f: => A) extends AnyVal {
The question, then, is how the RichElapsed class could be refactored.
Thanks in advance.
Peter Schmitz's solution to simply drop the val (along with the hope of turning RichElapsed into a value class) is certainly the simplest and least intrusive thing to do.
If you really feel like you need a value class, another alternative is this:
class RichElapsed[A](val f: () => A) extends AnyVal {
  def elapsed(): (A, Double) = {
    val start = System.nanoTime()
    val res = f()
    val end = System.nanoTime()
    (res, (end - start) / 1e6)
  }
}

implicit def toRichElapsed[A](f: => A) = new RichElapsed[A](() => f)
Note that while using a value class as above avoids instantiating a temporary RichElapsed object, there is still some wrapping going on (both with my solution and with Peter Schmitz's solution).
Namely, the body passed by name as f is wrapped in a function instance (in Peter Schmitz's case this is not apparent in the code but will still happen under the hood).
If you want to remove this wrapping too, I believe the only solution would be to use a macro.
Do it without the val, as the error message demands; then you also have to abandon AnyVal, since a value class needs exactly one public val parameter:
implicit class RichElapsed[A](f: => A)
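For reference, a complete sketch of that non-value-class variant (the fact helper is my own example name, not from the answer):

implicit class RichElapsed[A](f: => A) {
  def elapsed(): (A, Double) = {
    val start = System.nanoTime()
    val res = f // the by-name argument is evaluated (and thus timed) here
    val end = System.nanoTime()
    (res, (end - start) / 1e6) // elapsed time in milliseconds
  }
}

def fact(n: Int) = (1 to n).product
val (res, time) = fact(3).elapsed() // res = 6, time = however long the evaluation took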

In Scala, what does "extends (A => B)" on a case class mean?

In researching how to do memoization in Scala, I've found some code I didn't grok. I've tried to look this particular "thing" up, but don't know what to call it, i.e. the term by which to refer to it. Additionally, it's not easy searching using a symbol, ugh!
I saw the following code to do memoization in Scala here:
import scala.collection.mutable

case class Memo[A,B](f: A => B) extends (A => B) {
  private val cache = mutable.Map.empty[A, B]
  def apply(x: A) = cache getOrElseUpdate (x, f(x))
}
And it's what the case class is extending that is confusing me, the extends (A => B) part. First, what is happening? Secondly, why is it even needed? And finally, what do you call this kind of inheritance; i.e. is there some specific name or term I can use to refer to it?
Next, I am seeing Memo used in this way to calculate a Fibonacci number here:
val fibonacci: Memo[Int, BigInt] = Memo {
  case 0 => 0
  case 1 => 1
  case n => fibonacci(n-1) + fibonacci(n-2)
}
It's probably that I'm not seeing all of the "simplifications" being applied. But I am not able to figure out the end of the val line, = Memo {. If this were typed out more verbosely, perhaps I would understand the "leap" being made as to how the Memo is being constructed.
Any assistance on this is greatly appreciated. Thank you.
A => B is short for Function1[A, B], so your Memo extends a function from A to B, most prominently through the method apply(x: A): B, which must be implemented.
Because of the "infix" notation, you need to put parentheses around the type, i.e. (A => B). You could also write
case class Memo[A, B](f: A => B) extends Function1[A, B] ...
or
case class Memo[A, B](f: Function1[A, B]) extends Function1[A, B] ...
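A tiny illustration of what the extends buys you (assuming the Memo definition from the question is in scope; the double example is mine):

val double = Memo[Int, Int](_ * 2)
List(1, 2, 3).map(double) // List(2, 4, 6), accepted because Memo[Int, Int] is an Int => Int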
To complete 0__'s answer, fibonacci is being instantiated through the apply method of Memo's companion object, which is generated automatically by the compiler since Memo is a case class.
This means that the following code is generated for you:
object Memo {
  def apply[A, B](f: A => B): Memo[A, B] = new Memo(f)
}
Scala has special handling for the apply method: its name need not be typed when calling it. The two following calls are strictly equivalent:
Memo((a: Int) => a * 2)
Memo.apply((a: Int) => a * 2)
The case block is known as pattern matching. Under the hood, it generates a partial function - that is, a function that is defined for some of its input parameters, but not necessarily all of them. I won't go into the details of partial functions as it's beside the point (this is a memo I wrote to myself on that topic, if you're keen), but what it essentially means here is that the case block is in fact an instance of PartialFunction.
If you follow that link, you'll see that PartialFunction extends Function1 - which is the expected argument of Memo.apply.
So what that bit of code actually means, once desugared (if that's a word), is:
lazy val fibonacci: Memo[Int, BigInt] = Memo.apply(new PartialFunction[Int, BigInt] {
  override def apply(v: Int): BigInt =
    if (v == 0) 0
    else if (v == 1) 1
    else fibonacci(v - 1) + fibonacci(v - 2)
  override def isDefinedAt(v: Int) = true
})
Note that I've vastly simplified the way the pattern matching is handled, but I thought that starting a discussion about unapply and unapplySeq would be off topic and confusing.
I am the original author of doing memoization this way. You can see some sample usages in that same file. It also works really well when you want to memoize on multiple arguments, because of the way Scala unrolls tuples:
/**
 * @return memoized function to calculate C(n,r)
 * see http://mathworld.wolfram.com/BinomialCoefficient.html
 */
val c: Memo[(Int, Int), BigInt] = Memo {
  case (_, 0) => 1
  case (n, r) if r > n/2 => c(n, n-r)
  case (n, r) => c(n-1, r-1) + c(n-1, r)
}
// note how I can invoke a memoized function on multiple args too
val x = c(10, 3)
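For clarity, the two-argument call above works because the compiler adapts the argument list into the single tuple parameter that c's apply expects (auto-tupling; newer compilers may warn about this adaptation). The two forms below are equivalent, and both yield BigInt(120):

val x1 = c(10, 3) // auto-tupled by the compiler
val x2 = c((10, 3)) // explicit tuple argument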
This answer is a synthesis of the partial answers provided by both 0__ and Nicolas Rinaudo.
Summary:
There are many convenient (but also highly intertwined) assumptions being made by the Scala compiler.
1. Scala treats extends (A => B) as synonymous with extends Function1[A, B] (ScalaDoc for Function1[+T1, -R]).
2. A concrete implementation of Function1's inherited abstract method apply(x: A): B must be provided: def apply(x: A): B = cache.getOrElseUpdate(x, f(x)).
3. Scala assumes an implied match for the code block starting with = Memo {.
4. Scala passes the content between the {} started in item 3 as a parameter to the Memo case class constructor.
5. Scala assumes an implied type for the {} started in item 3, namely PartialFunction[Int, BigInt], and the compiler uses the "match" code block as the override for the PartialFunction method apply() and then provides an additional override for the PartialFunction method isDefinedAt().
Details:
The first code block defining the case class Memo can be written more verbosely as such:
import scala.collection.mutable

case class Memo[A,B](f: A => B) extends Function1[A, B] { // replaced (A => B) with what the Scala compiler translates it to mean
  private val cache = mutable.Map.empty[A, B]
  def apply(x: A): B = cache.getOrElseUpdate(x, f(x)) // concrete implementation of the abstract method inherited from the parent class, Function1
}
The second code block defining the val fibonacci can be written more verbosely as such:
lazy val fibonacci: Memo[Int, BigInt] = {
  Memo.apply(
    new PartialFunction[Int, BigInt] {
      override def apply(x: Int): BigInt = {
        x match {
          case 0 => 0
          case 1 => 1
          case n => fibonacci(n-1) + fibonacci(n-2)
        }
      }
      override def isDefinedAt(x: Int): Boolean = true
    }
  )
}
I had to add lazy to the second code block's val in order to deal with the self-reference in the line case n => fibonacci(n-1) + fibonacci(n-2).
And finally, an example usage of fibonacci is:
val x: BigInt = fibonacci(20) // returns 6765 (almost instantly)
One more word about this extends (A => B): the extends here is not strictly required, but it is necessary if instances of Memo are to be used with higher-order functions or in similar situations.
Without this extends (A => B), it's totally fine as long as you use the Memo instance fibonacci in plain method calls:
case class Memo[A,B](f: A => B) {
  private val cache = scala.collection.mutable.Map.empty[A, B]
  def apply(x: A): B = cache getOrElseUpdate (x, f(x))
}

val fibonacci: Memo[Int, BigInt] = Memo {
  case 0 => 0
  case 1 => 1
  case n => fibonacci(n-1) + fibonacci(n-2)
}
For example:
scala> fibonacci(30)
res1: BigInt = 832040
But when you want to use it with higher-order functions, you'd get a type mismatch error:
scala> Range(1, 10).map(fibonacci)
<console>:11: error: type mismatch;
found : Memo[Int,BigInt]
required: Int => ?
Range(1, 10).map(fibonacci)
^
So the extends here only serves to advertise to other code that the instance fibonacci has an apply method (that it is an Int => BigInt), and can therefore be used wherever such a function is expected.
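To illustrate, a minimal sketch with the original extends (A => B) restored:

// Memo[Int, BigInt] now is an Int => BigInt, so the higher-order call compiles:
Range(1, 10).map(fibonacci) // Vector(1, 1, 2, 3, 5, 8, 13, 21, 34)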

How to abstract the number of parameters of a function with type parameters in Scala?

There is a Wrapper class for arbitrary functions. I tried to abstract the input and output (return value) of the function with the two type parameters [I, O] (for input and output).
class Wrapper[I, O](protected val f: I => O) {
  protected def doIt(input: I): O = f(input)
}
As this should be a wrapper for arbitrary functions, I have a problem with functions that take multiple parameters.
val multiplyFunction = (a: Int, b: Int) => a * b
val multiplyWrapper = new Wrapper[(Int, Int), Int](multiplyFunction)
The second line does not compile, because the wrapper expects a function that takes a tuple of two Ints as its only parameter.
Is there a way to rewrite this so that I can abstract over the function's parameters, no matter how many there are? Ideally the solution would be type safe, enforced by the compiler.
Maybe there is an alternative to using a tuple to specify the types for the wrapper when creating an instance of it.
I hope I don't have to write it out like the tuple classes Tuple2 to TupleN or Function2 to FunctionN. I don't know all the details about those, but that looks more like a workaround and not an abstract / generic solution.
You could use the tupled method on the function: new Wrapper(multiplyFunction.tupled).
If you want to make this transparent to the wrapper class's user, you could use duck typing:
object Wrapper {
  def apply[I, O](e: { def tupled: I => O }) = new Wrapper(e.tupled)
  def apply[I, O](e: I => O) = new Wrapper(e)
}
scala> Wrapper( (a: Int) => a )
res0: Wrapper[Int,Int] = Wrapper@29d03e78
scala> Wrapper( (a: Int, b: Int) => a * b )
res1: Wrapper[(Int, Int),Int] = Wrapper@581cdfc2
You'll get some overhead due to reflection.
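If the reflective call bothers you, a plain (non-duck-typed) sketch of the tupled approach looks like this; the apply method here is my addition for the demo, not part of the original Wrapper:

class Wrapper[I, O](protected val f: I => O) {
  protected def doIt(input: I): O = f(input)
  def apply(input: I): O = doIt(input) // public entry point, added only for this demo
}

val multiplyFunction = (a: Int, b: Int) => a * b
// Function2#tupled turns (Int, Int) => Int into ((Int, Int)) => Int
val multiplyWrapper = new Wrapper[(Int, Int), Int](multiplyFunction.tupled)
println(multiplyWrapper((3, 4))) // 12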

Why is reference to overloaded definition ambiguous when types are known?

I have a function like so:
def ifSome[B, _](pairs: (Option[B], B => _)*) {
  for ((paramOption, setFunc) <- pairs)
    for (someParam <- paramOption) setFunc(someParam)
}
and overloaded functions like these:
class Foo {
  var b = ""
  def setB(b: String) { this.b = b }
  def setB(b: Int) { this.b = b.toString }
}
val f = new Foo
then the following line produces an error:
ifSome(Option("hi") -> f.setB _)
<console>:11: error: ambiguous reference to overloaded definition,
both method setB in class Foo of type (b: Int)Unit
and method setB in class Foo of type (b: String)Unit
match expected type ?
ifSome(Option("hi") -> f.setB _)
But the compiler knows that we're looking for a Function1[java.lang.String, _], so why should the presence of a Function1[Int, _] present any confusion? Am I missing something or is this a compiler bug (or perhaps it should be a feature request)?
I was able to workaround this by using a type annotation like so
ifSome(Option("hi") -> (f.setB _:String=>Unit))
but I'd like to understand why this is necessary.
You'll want to try $ scalac -Ydebug -Yinfer-debug x.scala, but first you'll want to minimize your example.
In this case, you'll see how in the curried version, B is solved in the first param list:
[infer method] solving for B in (bs: B*)(bfs: Function1[B, _]*)Nothing
based on (String)(bfs: Function1[B, _]*)Nothing (solved: B=String)
For the uncurried version, you'll see some strangeness around
[infer view] <empty> with pt=String => Int
as it tries to disambiguate the overload, which may lead you to the weird solution below.
The dummy implicit serves the sole purpose of resolving the overload so that inference can get on with it. The implicit itself is unused and can remain unimplemented (???).
That's a pretty weird solution, but you know that overloading is evil, right? And you've got to fight evil with whatever tools are at your disposal.
Also see that your type annotation workaround is more laborious than just specifying the type param in the normal way.
object Test extends App {
  def f[B](pairs: (B, B => _)*) = ???
  def f2[B](bs: B*)(bfs: (B => _)*) = ???
  def g(b: String) = ???
  def g(b: Int) = ???
  // explicitly
  f[String](Pair("hi", g _))
  // solves for B in first ps
  f2("hi")(g _)
  // using Pair instead of arrow means less debug output
  //f(Pair("hi", g _))
  locally {
    // unused, but selects g(String) and solves B=String
    import language.implicitConversions
    implicit def cnv1(v: String): Int = ???
    f(Pair("hi", g _))
  }
  // a more heavy-handed way to fix the type
  class P[A](a: A, fnc: A => _)
  class PS(a: String, fnc: String => _) extends P[String](a, fnc)
  def p[A](ps: P[A]*) = ???
  p(new PS("hi", g _))
}
Type inference in Scala only works from one parameter list to the next. Since your ifSome only has one parameter list, Scala won't infer anything. You can change ifSome as follows:
def ifSome[B, _](opts: Option[B]*)(funs: (B => _)*) {
  val pairs = opts.zip(funs)
  for ((paramOption, setFunc) <- pairs)
    for (someParam <- paramOption) setFunc(someParam)
}
leave Foo as it is...
class Foo {
  var b = ""
  def setB(b: String) { this.b = b }
  def setB(b: Int) { this.b = b.toString }
}
val f = new Foo
And change the call to ifSome accordingly:
ifSome(Option("hi"))(f.setB _)
And it all works. Now of course you have to check whether opts and funs have the same length at runtime.
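For completeness, a quick sanity check of the curried version (assuming the Foo and ifSome definitions above):

ifSome(Option("hi"))(f.setB _) // B is inferred as String from the first list, so the String overload is picked
println(f.b) // hi

ifSome(Option(7))(f.setB _) // B is inferred as Int, so the Int overload is picked
println(f.b) // 7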