Scala call-by-name constructor parameter in implicit class

The following code does not compile. The goal is to have a call-by-name constructor parameter in an implicit class, as illustrated here:
def f(n: Int) = (1 to n) product
implicit class RichElapsed[A](val f: => A) extends AnyVal {
  def elapsed(): (A, Double) = {
    val start = System.nanoTime()
    val res = f
    val end = System.nanoTime()
    (res, (end - start) / 1e6)
  }
}
so that a call like the following would work:
val (res, time) = f(3).elapsed
res: Int = 6
time: Double = 123.0
This error is reported in the REPL:
<console>:1: error: `val' parameters may not be call-by-name
implicit class RichElapsed[A](val f: => A) extends AnyVal {
The question, then, is how the RichElapsed class could be refactored.
Thanks in advance.

Peter Schmitz's solution to simply drop the val (along with the hope of turning RichElapsed into a value class) is certainly the simplest and least intrusive thing to do.
If you really feel like you need a value class, another alternative is this:
class RichElapsed[A](val f: () => A) extends AnyVal {
  def elapsed(): (A, Double) = {
    val start = System.nanoTime()
    val res = f()
    val end = System.nanoTime()
    (res, (end - start) / 1e6)
  }
}

implicit def toRichElapsed[A](f: => A) = new RichElapsed[A](() => f)
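For completeness, a minimal usage sketch of that variant (the timing value shown is illustrative only):
def f(n: Int) = (1 to n).product

// The expression f(3) is passed by name into toRichElapsed and only evaluated
// inside elapsed(), via the () => f thunk, so the measurement is meaningful.
val (res, time) = f(3).elapsed()
// res: Int = 6, time: Double = some small number of milliseconds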
Note that while using a value class as above avoids instantiating a temporary RichElapsed object, there is still some wrapping going on (both with my solution and with Peter Schmitz's solution).
Namely, the body passed by name as f is wrapped into a function instance (in Peter Schmitz's case this is not apparent in the code but will happen anyway under the hood).
If you want to remove this wrapping too, I believe the only solution would be to use a macro.

Do it without the val, as the error message demands, and then you also have to abandon AnyVal, since a value class needs to have exactly one public val parameter:
implicit class RichElapsed[A](f: => A)
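Spelled out, that variant could look like this (a sketch; without AnyVal a temporary RichElapsed is allocated at each call site, as noted in the other answer):
implicit class RichElapsed[A](f: => A) {
  def elapsed(): (A, Double) = {
    val start = System.nanoTime()
    val res = f
    val end = System.nanoTime()
    (res, (end - start) / 1e6)
  }
}

val (res, time) = f(3).elapsed()   // as in the question: res == 6, time in milliseconds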

Related

What is the difference between MyClass and MyClass.this.type and how to turn one to another?

I have a situation similar to this:
trait Abst {
  type T
  def met1(p: T) = p.toString
  def met2(p: T, f: Double => this.type) {
    val v = f(1.0)
    v.met1(p)
  }
}

class MyClass(x: Double) extends Abst {
  case class Param(a: Int)
  type T = Param
  val s = met2(Param(1), (d: Double) => new MyClass(d))
}
And it doesn't show errors until I run it, then it says:
type mismatch; found: MyClass, required:
MyClass.this.type
I also tried a solution with a generic type, but then I get a conflict because this.T differs from v.T.
So I just need to overcome the error message above, if possible.
Update
So it turns out that this.type is the singleton type of that single instance. In a comment I proposed using
val s = met2(Param(1), (d: Double) => (new MyClass(d)).asInstanceOf[this.type])
If someone could comment on this: I am aware of how ugly it is, I am just interested in how unsafe it is.
Also, you all proposed moving the definition of Param outside the class, which I definitely agree with. So its definition will go in the companion object MyClass.
The this.type is the singleton type that is inhabited by one single value, namely this. Therefore, accepting a function of type f: X => this.type as an argument is guaranteed to be nonsensical, because every invocation of f can just be replaced by this (plus the side-effects performed by f).
Here is a way of forcing your code to compile with minimal changes:
trait Abst { self =>
  type T
  def met1(p: T) = p.toString
  def met2(p: T, f: Double => Abst { type T = self.T }) {
    val v = f(1.0)
    v.met1(p)
  }
}

case class Param(a: Int)

class MyClass(x: Double) extends Abst {
  type T = Param
  val s = met2(Param(1), (d: Double) => new MyClass(d))
}
But honestly: don't do it. And also don't do any of the F-bounded-stuff either, it will probably end up as a total mess, especially if you are not familiar with the pattern. Instead, refactor your code so that you don't have any self-referential spirals in it.
Update
A remark on why telling the compiler that (new MyClass(d)) is of type this.type for some other this: MyClass is a really bad idea:
abstract class A {
  type T
  val x: T
  val f: T => Unit
  def blowup(a: A): Unit = a.asInstanceOf[this.type].f(x)
}

object A {
  def apply[X](point: X, function: X => Unit): A = new A {
    type T = X
    val x = point
    val f = function
  }
}
val a = A("hello", (s: String) => println(s.size))
val b = A(42, (n: Int) => println(n + 58))
b.blowup(a)
This blows up with a ClassCastException, despite a and b both being of type A.
If you don't mind making the trait take T as a generic parameter, this is a fairly simple and straightforward equivalent solution:
trait Abst[T] {
  def met1(p: T) = p.toString
  def met2(p: T, f: Double => Abst[T]) {
    val v = f(1.0)
    v.met1(p)
  }
}

case class Param(a: Int)

class MyClass(x: Double) extends Abst[Param] {
  val s = met2(Param(1), (d: Double) => new MyClass(d))
}
I say it's equivalent because you're not losing any information by having met2 use the supertype instead of the subtype. The classic use case for referencing the subtype in a trait is e.g. having a method which you want to return MyClass instead of Abst even though it's defined in Abst, but that's not the situation you're in here. The only place your subtype reference is used is in the definition of f, and since function types are covariant in their result type, you can pass any f: Double => MyClass where an f: Double => Abst[T] is expected without problems (see the short sketch below).
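A short sketch of that covariance point, assuming the generic Abst[T] and MyClass definitions just above:
val mk: Double => MyClass = (d: Double) => new MyClass(d)
val widened: Double => Abst[Param] = mk   // compiles: MyClass <: Abst[Param], and Function1 is covariant in its result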
If you do want to reference the subtype anyway, see Markus' answer... and if you do want to avoid having T be a generic parameter, things get a lot more complicated again, because now you have potential conflicts between the T of Abst versus the T of the subtype in the definition of met2.
To overcome this error message you have to use F-bounded polymorphism.
Your code will look somewhat like this:
trait Abst[F <: Abst[F, T], T] { self: F =>
  def met1(p: T): String = p.toString
  def met2(p: T, f: Double => F): String = {
    val v = f(1.0)
    v.met1(p)
  }
}

case class Param(a: Int)

class MyClass(x: Double) extends Abst[MyClass, Param] {
  val s = met2(Param(1), (d: Double) => new MyClass(d))
}
Explanation:
Using self: F => inside a trait or class definition constrains the type of this. So your code will not compile if this is not of type F.
We are using a cyclic type constraint of F: F <: Abst[F, T]. Though counterintuitive, the compiler does not mind it.
In the implementation, MyClass, we then extend MyClass with Abst[MyClass, Param], which in turn satisfies F <: Abst[F, T].
Now you can use F as the return type of a function in Abst and have the implementation in MyClass return MyClass.
You might think this solution is ugly, and if you do, then you're right.
Instead of using F-bounded polymorphism, it's usually recommended to use typeclasses for ad-hoc polymorphism; a rough sketch follows below.
You will find more information about it in the link I provided earlier.
Really, read it. It will change your view on generic programming forever.
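For illustration only, here is a rough sketch of what a typeclass-style encoding could look like for this example (the typeclass name Met1 and the instance name are invented; this is not the questioner's original design):
case class Param(a: Int)

// The typeclass: evidence that values of F support met1.
trait Met1[F] {
  def met1(v: F, p: Param): String
}

// met2 no longer needs a common supertype; it just demands the evidence.
def met2[F](p: Param, f: Double => F)(implicit m: Met1[F]): String =
  m.met1(f(1.0), p)

class MyClass(x: Double)

object MyClass {
  implicit val myClassMet1: Met1[MyClass] = new Met1[MyClass] {
    def met1(v: MyClass, p: Param): String = p.toString
  }
}

val s = met2(Param(1), (d: Double) => new MyClass(d))   // "Param(1)"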
I hope this helps.

How to use sequence from scalaz to transform T[G[A]] to G[T[A]]

I have this code to transform List[Future[Int]] to Future[List[Int]] by using scalaz sequence.
import scalaz.concurrent.Future
val t = List(Future.now(1), Future.now(2), Future.now(3)) //List[Future[Int]]
val r = t.sequence //Future[List[Int]]
Because I am using Future from scalaz, the implicit resolution does the magic for me. I just wonder: if the type is a custom class rather than a predefined one like Future, how can I define the implicit instances to achieve the same result?
case class Foo(x: Int)
val t = List(Foo(1), Foo(2), Foo(3)) //List[Foo[Int]]
val r = t.sequence //Foo[List[Int]]
Many thanks in advance
You need to create an Applicative[Foo] (or a Monad[Foo]) which is in implicit scope.
What you are asking for won't work exactly as written, since your Foo is not universally quantified (your expected type for r, Foo[List[Int]], doesn't make sense because Foo doesn't take a type parameter).
Let's define Foo differently:
import scalaz.Applicative

case class Foo[A](a: A)

object Foo {
  implicit val fooApplicative = new Applicative[Foo] {
    override def point[A](a: => A) = Foo(a)
    override def ap[A, B](fa: => Foo[A])(f: => Foo[A => B]): Foo[B] = Foo(f.a(fa.a))
  }
}
Now if we make sure this implicit is in scope, we can sequence:
scala> val t = List(Foo(1), Foo(2), Foo(3))
t: List[Foo[Int]] = List(Foo(1), Foo(2), Foo(3))
scala> t.sequence
res0: Foo[List[Int]] = Foo(List(1, 2, 3))
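Note that, besides the Applicative[Foo] instance (which the compiler finds via Foo's companion object), the Traverse[List] instance and the sequence syntax also need to be in scope; with scalaz 7 the blanket import is the simplest way to get them (a sketch, assuming scalaz 7.x):
import scalaz._, Scalaz._   // brings Traverse[List] and the .sequence syntax into scope

val t = List(Foo(1), Foo(2), Foo(3))
val r = t.sequence          // Foo[List[Int]] = Foo(List(1, 2, 3))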

In Scala, what does "extends (A => B)" on a case class mean?

In researching how to do Memoization in Scala, I've found some code I didn't grok. I've tried to look this particular "thing" up, but don't know by what to call it; i.e. the term by which to refer to it. Additionally, it's not easy searching using a symbol, ugh!
I saw the following code to do memoization in Scala here:
import scala.collection.mutable

case class Memo[A, B](f: A => B) extends (A => B) {
  private val cache = mutable.Map.empty[A, B]
  def apply(x: A) = cache getOrElseUpdate (x, f(x))
}
And it's what the case class is extending that is confusing me, the extends (A => B) part. First, what is happening? Secondly, why is it even needed? And finally, what do you call this kind of inheritance; i.e. is there some specific name or term I can use to refer to it?
Next, I am seeing Memo used in this way to calculate a Fibonacci number here:
val fibonacci: Memo[Int, BigInt] = Memo {
  case 0 => 0
  case 1 => 1
  case n => fibonacci(n-1) + fibonacci(n-2)
}
It's probably that I'm not seeing all of the "simplifications" being applied. But I am not able to figure out the end of the val line, = Memo {. If this were typed out more verbosely, perhaps I would understand the "leap" being made as to how the Memo is being constructed.
Any assistance on this is greatly appreciated. Thank you.
A => B is shorthand for Function1[A, B], so your Memo extends a function from A to B, whose most prominent member is the abstract method apply(x: A): B, which must be implemented.
Because of the "infix" notation, you need to put parentheses around the type, i.e. (A => B). You could also write
case class Memo[A, B](f: A => B) extends Function1[A, B] ...
or
case class Memo[A, B](f: Function1[A, B]) extends Function1[A, B] ...
To complete 0__'s answer, fibonacci is being instantiated through the apply method of Memo's companion object, generated automatically by the compiler since Memo is a case class.
This means that the following code is generated for you:
object Memo {
  def apply[A, B](f: A => B): Memo[A, B] = new Memo(f)
}
Scala has special handling for the apply method: its name need not be typed when calling it. The two following calls are strictly equivalent:
Memo((a: Int) => a * 2)
Memo.apply((a: Int) => a * 2)
The case block is known as pattern matching. Under the hood, it generates a partial function - that is, a function that is defined for some of its input values, but not necessarily all of them. I won't go into the details of partial functions as it's beside the point (this is a memo I wrote to myself on that topic, if you're keen), but what it essentially means here is that the case block is in fact an instance of PartialFunction.
If you follow that link, you'll see that PartialFunction extends Function1 - which is the expected argument of Memo.apply.
So what that bit of code actually means, once desugared (if that's a word), is:
lazy val fibonacci: Memo[Int, BigInt] = Memo.apply(new PartialFunction[Int, BigInt] {
  override def apply(v: Int): BigInt =
    if (v == 0) 0
    else if (v == 1) 1
    else fibonacci(v - 1) + fibonacci(v - 2)
  override def isDefinedAt(v: Int) = true
})
Note that I've vastly simplified the way the pattern matching is handled, but I thought that starting a discussion about unapply and unapplySeq would be off topic and confusing.
I am the original author of doing memoization this way. You can see some sample usages in that same file. It also works really well when you want to memoize on multiple arguments, because of the way Scala unrolls tuples:
/**
 * @return memoized function to calculate C(n,r)
 * see http://mathworld.wolfram.com/BinomialCoefficient.html
 */
val c: Memo[(Int, Int), BigInt] = Memo {
  case (_, 0) => 1
  case (n, r) if r > n/2 => c(n, n-r)
  case (n, r) => c(n-1, r-1) + c(n-1, r)
}
// note how I can invoke a memoized function on multiple args too
val x = c(10, 3)
This answer is a synthesis of the partial answers provided by both 0__ and Nicolas Rinaudo.
Summary:
There are many convenient (but also highly intertwined) assumptions being made by the Scala compiler.
1. Scala treats extends (A => B) as synonymous with extends Function1[A, B] (ScalaDoc for Function1[+T1, -R]).
2. A concrete implementation of Function1's inherited abstract method apply(x: A): B must be provided; def apply(x: A): B = cache.getOrElseUpdate(x, f(x)).
3. Scala assumes an implied match for the code block starting with = Memo {.
4. Scala passes the content between {} started in item 3 as a parameter to the Memo case class constructor.
5. Scala assumes an implied type between {} started in item 3 as PartialFunction[Int, BigInt], and the compiler uses the "match" code block as the override for the PartialFunction method apply() and then provides an additional override for the PartialFunction method isDefinedAt().
Details:
The first code block defining the case class Memo can be written more verbosely as such:
case class Memo[A, B](f: A => B) extends Function1[A, B] { //replaced (A => B) with what it's translated to mean by the Scala compiler
  private val cache = mutable.Map.empty[A, B]
  def apply(x: A): B = cache.getOrElseUpdate(x, f(x)) //concrete implementation of unimplemented method defined in parent class, Function1
}
The second code block, defining the val fibonacci, can be written more verbosely as such:
lazy val fibonacci: Memo[Int, BigInt] = {
  Memo.apply(
    new PartialFunction[Int, BigInt] {
      override def apply(x: Int): BigInt = {
        x match {
          case 0 => 0
          case 1 => 1
          case n => fibonacci(n-1) + fibonacci(n-2)
        }
      }
      override def isDefinedAt(x: Int): Boolean = true
    }
  )
}
I had to add lazy to the second code block's val in order to deal with a self-referential problem in the line case n => fibonacci(n-1) + fibonacci(n-2).
And finally, an example usage of fibonacci is:
val x:BigInt = fibonacci(20) //returns 6765 (almost instantly)
One more word about this extends (A => B): the extends here is not strictly required, but it becomes necessary if instances of Memo are to be used in higher-order functions or similar situations.
Without this extends (A => B), it's totally fine to use the Memo instance fibonacci in plain method calls.
case class Memo[A, B](f: A => B) {
  private val cache = scala.collection.mutable.Map.empty[A, B]
  def apply(x: A): B = cache getOrElseUpdate (x, f(x))
}

val fibonacci: Memo[Int, BigInt] = Memo {
  case 0 => 0
  case 1 => 1
  case n => fibonacci(n-1) + fibonacci(n-2)
}
For example:
scala> fibonacci(30)
res1: BigInt = 832040
But when you want to use it in higher order functions, you'd have a type mismatch error.
scala> Range(1, 10).map(fibonacci)
<console>:11: error: type mismatch;
found : Memo[Int,BigInt]
required: Int => ?
Range(1, 10).map(fibonacci)
^
So the extends here only serves to advertise that the instance fibonacci has an apply method, i.e. that it is a Function1, and can therefore be used wherever a function is expected.
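With the extends (A => B) version from the question, the same higher-order call compiles, since fibonacci is then an Int => BigInt (result values shown for illustration):
Range(1, 10).map(fibonacci)   // Vector(1, 1, 2, 3, 5, 8, 13, 21, 34)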

Why is reference to overloaded definition ambiguous when types are known?

I have a function like so:
def ifSome[B, _](pairs: (Option[B], B => _)*) {
  for ((paramOption, setFunc) <- pairs)
    for (someParam <- paramOption) setFunc(someParam)
}
and overloaded functions like these:
class Foo {
  var b = ""
  def setB(b: String) { this.b = b }
  def setB(b: Int) { this.b = b.toString }
}
val f = new Foo
then the following line produces an error:
ifSome(Option("hi") -> f.setB _)
<console>:11: error: ambiguous reference to overloaded definition,
both method setB in class Foo of type (b: Int)Unit
and method setB in class Foo of type (b: String)Unit
match expected type ?
ifSome(Option("hi") -> f.setB _)
But the compiler knows that we're looking for a Function1[java.lang.String, _], so why should the presence of a Function1[Int, _] present any confusion? Am I missing something or is this a compiler bug (or perhaps it should be a feature request)?
I was able to work around this by using a type annotation, like so:
ifSome(Option("hi") -> (f.setB _:String=>Unit))
but I'd like to understand why this is necessary.
You'll want to try $ scalac -Ydebug -Yinfer-debug x.scala but first you'll want to minimize.
In this case, you'll see how in the curried version, B is solved in the first param list:
[infer method] solving for B in (bs: B*)(bfs: Function1[B, _]*)Nothing
based on (String)(bfs: Function1[B, _]*)Nothing (solved: B=String)
For the uncurried version, you'll see some strangeness around
[infer view] <empty> with pt=String => Int
as it tries to disambiguate the overload, which may lead you to the weird solution below.
The dummy implicit serves the sole purpose of resolving the overload so that inference can get on with it. The implicit itself is unused and remains unimplemented (its body is just ???).
That's a pretty weird solution, but you know that overloading is evil, right? And you've got to fight evil with whatever tools are at your disposal.
Also see that your type annotation workaround is more laborious than just specifying the type param in the normal way.
object Test extends App {
  def f[B](pairs: (B, B => _)*) = ???
  def f2[B](bs: B*)(bfs: (B => _)*) = ???
  def g(b: String) = ???
  def g(b: Int) = ???

  // explicitly
  f[String](Pair("hi", g _))

  // solves for B in first ps
  f2("hi")(g _)

  // using Pair instead of arrow means less debug output
  //f(Pair("hi", g _))

  locally {
    // unused, but selects g(String) and solves B=String
    import language.implicitConversions
    implicit def cnv1(v: String): Int = ???
    f(Pair("hi", g _))
  }

  // a more heavy-handed way to fix the type
  class P[A](a: A, fnc: A => _)
  class PS(a: String, fnc: String => _) extends P[String](a, fnc)
  def p[A](ps: P[A]*) = ???
  p(new PS("hi", g _))
}
Type inference in Scala only works from one parameter list to the next. Since your ifSome only has one parameter list, Scala won't infer anything. You can change ifSome as follows:
def ifSome[B, _](opts: Option[B]*)(funs: (B => _)*) {
  val pairs = opts.zip(funs)
  for ((paramOption, setFunc) <- pairs)
    for (someParam <- paramOption) setFunc(someParam)
}
leave Foo as it is...
class Foo {
  var b = ""
  def setB(b: String) { this.b = b }
  def setB(b: Int) { this.b = b.toString }
}
val f = new Foo
And change the call to ifSome accordingly:
ifSome(Option("hi"))(f.setB _)
And it all works. Now of course you have to check whether opts and funs have the same length at runtime.
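A minimal sketch of making that runtime check explicit (the unused _ type parameter is dropped here):
def ifSome[B](opts: Option[B]*)(funs: (B => _)*) {
  require(opts.length == funs.length, "every Option needs a matching setter")
  for ((paramOption, setFunc) <- opts.zip(funs); someParam <- paramOption)
    setFunc(someParam)
}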

Scala: implicit conversion to generate method values?

If I have the following class in Scala:
class Simple {
  def doit(a: String): Int = 42
}
And an instance of that class
val o = new Simple()
Is it possible to define an implicit conversion that will morph this instance and a method known at compile time into a tuple like this?
Tuple2 (o, (_: Simple).doit _)
I was hoping I could come up with one for registering function callbacks in the spirit of:
doThisLater (o -> 'doit)
I functionally have my function callbacks working based on retronym's answer to a previous SO question, but it'd be great to add this thick layer of syntactic sugar.
You can just eta-expand the method. Sample REPL session,
scala> case class Deferred[T, R](f : T => R)
defined class Deferred
scala> def doThisLater[T, R](f : T => R) = Deferred(f)
doThisLater: [T, R](f: T => R)Deferred[T,R]
scala> val deferred = doThisLater(o.doit _) // eta-convert doit
deferred: Deferred[String,Int] = Deferred(<function1>)
scala> deferred.f("foo")
res0: Int = 42
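If you want syntax a bit closer to what the question asks for, here is a hedged sketch layering an implicit class on top of the same Deferred (the method name later is invented for illustration):
implicit class DeferredSyntax[T, R](f: T => R) {
  def later: Deferred[T, R] = Deferred(f)
}

val d = (o.doit _).later   // Deferred[String, Int], same as doThisLater(o.doit _)
d.f("foo")                 // 42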