Scala case class prohibits call-by-name parameters?

Scenario:
I want to implement an infinite list:
abstract class MyList[+T]
case object MyNil extends MyList[Nothing]
case class MyNode[T](h:T,t: => MyList[T]) extends MyList[T]
//error: `val' parameters may not be call-by-name
Problem:
The error is that call-by-name is not allowed.
I've heard that it is because a val or var constructor parameter is not allowed to be call-by-name. For example:
class A(val x: =>Int)
//error: `val' parameters may not be call-by-name
But in contrast, a normal constructor parameter is still a val, albeit a private one. For example:
class A(x: =>Int)
// pass
So the questions:
Is the problem really about val or var?
If so, since the point of call-by-name is to defer computation, why can't the computation (or initialization) of a val or var be deferred?
How to get around the case class limitation to implement an infinite list?

There is no contradiction: class A(x: => Int) is equivalent to class A(private[this] val x: => Int) and not class A(private val x: => Int). private[this] marks a value instance-private, while a private-modifier without further specification allows accessing the value from any instance of that class.
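A small illustration of the difference, using two throwaway classes (P and Q are just for this example, not part of the question):
class P(private val x: Int) {
  def sameAs(other: P) = x == other.x // compiles: plain private is visible from other instances of P
}
class Q(private[this] val x: Int) {
  // def sameAs(other: Q) = x == other.x // does not compile: x is object-private, so other.x is inaccessible
}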
Unfortunately, defining a case class A(private[this] val x: => Int) is not allowed either. I assume it is because case-classes need access to the constructor values of other instances, because they implement the equals method.
Nevertheless, you could implement the features that a case class would provide manually:
abstract class MyList[+T]

class MyNode[T](val h: T, t: => MyList[T]) extends MyList[T] {
  def getT = t // we need to be able to access t

  /* EDIT: Actually, this will also lead to an infinite recursion
  override def equals(other: Any): Boolean = other match {
    case MyNode(i, y) if (getT == y) && (h == i) => true
    case _ => false
  } */

  override def hashCode = h.hashCode
  override def toString = "MyNode[" + h + "]"
}

object MyNode {
  def apply[T](h: T, t: => MyList[T]) = new MyNode(h, t)
  def unapply[T](n: MyNode[T]) = Some(n.h -> n.getT)
}
To check this code, you could try:
def main(args: Array[String]): Unit = {
  lazy val first: MyNode[String] = MyNode("hello", second)
  lazy val second: MyNode[String] = MyNode("world", first)
  println(first)
  println(second)
  first match {
    case MyNode("hello", s) => println("the second node is " + s)
    case _ => println("false")
  }
}
Unfortunately, I do not know for sure why call-by-name val and var members are prohibited. However, there is at least one danger to it: think about how case classes implement toString; the toString method of every constructor value is called. This could (and in this example would) lead to the values calling each other infinitely. You can check this by adding t.toString to MyNode's toString method.
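To make that danger concrete, here is a hypothetical variant of MyNode (not part of the solution above) whose toString also prints the tail; wiring it into the mutually referencing first/second values from main would recurse until a StackOverflowError:
class VerboseMyNode[T](val h: T, t: => MyList[T]) extends MyList[T] {
  def getT = t
  // Printing the tail drags in the next node's toString, which prints its tail, and so on forever.
  override def toString = "VerboseMyNode[" + h + ", " + t + "]"
}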
Edit: After reading Chris Martin's comment: the implementation of equals will also pose a problem, probably a more severe one than toString (which is mostly used for debugging) and hashCode (which will only lead to higher collision rates if you can't take the parameter into account). You have to think carefully about how you would implement equals so that it is meaningful.

I also have not found out exactly why by-name parameters are prohibited in case classes. I guess the explanation would be quite elaborate and complex.
But Runar Bjarnason, in his book "Functional Programming in Scala", provides a good approach to handling this obstacle: he uses the concept of a "thunk" together with memoization.
Here is an example of Stream implementation:
sealed trait Stream[+A]
case object Empty extends Stream[Nothing]
case class Cons[+A](h: () => A, t: () => Stream[A]) extends Stream[A]

object Stream {
  def cons[A](hd: => A, tl: => Stream[A]): Stream[A] = {
    lazy val head = hd
    lazy val tail = tl
    Cons(() => head, () => tail)
  }

  def empty[A]: Stream[A] = Empty

  def apply[A](as: A*): Stream[A] =
    if (as.isEmpty) empty else cons(as.head, apply(as.tail: _*))
}
As you see, instead of a regular by-name parameter for the case class data constructor, they use what they call a "thunk": a zero-argument function () => T. Then, to make this transparent for the user, they declare a smart constructor in the companion object which lets you pass by-name parameters and memoizes them.
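A small usage sketch of that smart constructor; the println side effect (my addition, not from the book) is only there to make the evaluation order visible:
def expensiveHead: Int = { println("computing head"); 42 }

val s = Stream.cons(expensiveHead, Stream.empty) // nothing is printed yet: hd is passed by name
s match {
  case Cons(h, _) =>
    h() // prints "computing head" and yields 42
    h() // yields 42 again without recomputing, thanks to the lazy val inside cons
  case Empty => ()
}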

This is a similar approach to the Stream solution, but simplified to what is actually required:
case class A(x: () => Int) {
  lazy val xx = x()
}
So you can use your case class as:
def heavyOperation: Int = ???

val myA = A(() => heavyOperation)
val myOtherA = A(() => 10)
val useA = myA.xx + myOtherA.xx
This way, the actual heavy operation is performed only when you use xx, i.e., only on the last line.
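A quick way to see both the deferral and the memoization (the println is only there to trace evaluation):
val traced = A(() => { println("evaluating"); 1 }) // nothing printed yet
traced.xx // prints "evaluating" once and returns 1
traced.xx // returns 1 again without printing: the lazy val caches the result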

I like using an implicit conversion to make a thunk work like call-by-name, e.g. in this example:
case class Timed[R](protected val block: () => R) {
  override def toString() = s"Elapsed time: $elapsedTime"

  val t0 = System.nanoTime()
  val result = block() // execute thunk
  val t1 = System.nanoTime()
  val elapsedTime = t1 - t0
}
implicit def blockToThunk[R](bl: => R) = () => bl //helps to call Timed without the thunk syntax
This lets you call, for example, Timed({ Thread.sleep(1000); println("hello") }) with call-by-name syntax.
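For instance, a small usage sketch (the printed nanosecond value is of course illustrative):
val timed = Timed { Thread.sleep(1000); println("hello") } // the block runs once, while Timed is constructed
println(timed)             // prints something like "Elapsed time: 1002345678"
println(timed.elapsedTime) // the raw nanosecond count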

Related

How to generically chain functions returning Either with an operator like `andThen`?

Problem: I am chaining multiple Either-returning functions whose Left values are all failures inheriting from a common sealed trait InternalError. However, the compiler complains that the chain returns Either[_, Success] instead of Either[InternalError, Success].
Here's the code that does the chaining:
import scala.language.implicitConversions

object EitherExtension {
  implicit class AndThenEither[A, B](val e: Function1[A, Either[_, B]]) {
    // get ability to chain/compose functions that return aligning Eithers
    def andThenE[C](f: Function1[B, Either[_, C]]): Function1[A, Either[_, C]] = {
      (v1: A) => e.apply(v1).flatMap(b => f.apply(b))
    }
  }
}
As was pointed out in the comments, this discards the type of the Left. If I change it to the version below, it will not work either, since the final output can be of type Either[X | Y, C], which resolves to Either[_, C], and I'm back to square one.
implicit class AndThenEither[A, B, X](val e: (A) => Either[X, B]) {
  def andThenE[C, Y](f: (B) => Either[Y, C]): (A) => Either[_, C] = {
    (v1: A) => e.apply(v1).flatMap(b => f.apply(b))
  }
}
Here's the example showing the compositional failure of type alignment:
import EitherExtension._

object AndThenComposition {
  // sample type definitions of failure responses
  sealed trait InternalError
  case class Failure1() extends InternalError
  case class Failure2() extends InternalError

  // sample type definitions
  case class Id(id: Int)
  case class Stuff()

  // sample type definitions of successful responses
  case class Output1()
  case class Output2()

  case class InputRequest()

  val function1: (InputRequest) => Either[Failure1, Output1] = ???
  val function2: (Output1) => Either[Failure2, Output2] = ???

  def doSomething(s: Id, l: List[Stuff]): Either[InternalError, Output2] = {
    val pipeline = function1 andThenE function2
    pipeline(InputRequest()) // Why is this of type Either[_, Output2]?
  }
}
What am I missing? How can I get the return type to not be Either[Any, Output2] but rather the base/sealed trait? Is this possible to do generically?
You need to preserve the type of the Left, so we will modify the extension method to do that.
Note that, since both Eithers can have different Left types, we use a type bound to ask the compiler to infer the LUB of those types; thanks to Any this is always possible (although not always helpful).
object EitherExtension {
  implicit class AndThenEither[I, L1, R1](private val f: I => Either[L1, R1]) extends AnyVal {
    def andThenE[L2 >: L1, R2](g: R1 => Either[L2, R2]): I => Either[L2, R2] =
      i => f(i).flatMap(g)
  }
}
Which can be used like this:
import EitherExtension._

object AndThenComposition {
  sealed trait InternalError
  final case object Failure1 extends InternalError
  final case object Failure2 extends InternalError

  val function1: Int => Either[Failure1.type, String] = ???
  val function2: String => Either[Failure2.type, Boolean] = ???

  def doSomething(input: Int): Either[InternalError, Boolean] = {
    (function1 andThenE function2)(input)
  }
}
In case you're using this in production, and it's not just a learning exercise, what you're looking for is called Kleisli, and fortunately cats-core already implements it.
According to the cats-core docs:
Kleisli enables composition of functions that return a monadic value,
for instance an Option[Int] or a Either[String, List[Double]], without
having functions take an Option or Either as a parameter, which can be
strange and unwieldy.
Since Kleisli composes two functions with the signature A => F[B], you only need one abstraction to be able to use Kleisli, which is creating a new type for your operation:
type Operation[A] = Either[InternalError, A]
By doing this, you should be able to use Kleisli like this:
import cats.data.Kleisli
import cats.implicits._ // provides the FlatMap[Operation] instance that andThen needs

val first: Kleisli[Operation, InputRequest, Output1] = Kleisli { request: InputRequest =>
  Left(Failure1())
}

val second: Kleisli[Operation, Output1, Output2] = Kleisli { output: Output1 =>
  Right(Output2())
}

val composed = first.andThen(second)
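Running the composed Kleisli then gives back the widened error type directly; a small sketch, reusing the InputRequest, Output2, and InternalError types from the question:
val result: Either[InternalError, Output2] = composed.run(InputRequest())
// result == Left(Failure1()), because `first` short-circuits the chain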

Scala Dynamics: Ability to add dynamic methods?

I know I can add dynamic "fields" like this:
import scala.language.dynamics
import collection.mutable

class DynamicType extends Dynamic {
  private val fields = mutable.Map.empty[String, Any].withDefault { key => throw new NoSuchFieldError(key) }

  def selectDynamic(key: String) = fields(key)
  def updateDynamic(key: String)(value: Any) = fields(key) = value
  def applyDynamic(key: String)(args: Any*) = fields(key)
}
I can then do stuff like this:
val foo = new DynamicType
foo.age = 23
foo.name = "Rick"
But I want to extend this one step further and add dynamic methods, e.g.:
foo.greet = (name: String) => s"Nice to meet you $name, my name is ${this.name}"
foo.greet("Nat"); //should return "Nice to meet you Nat, my name is Rick"
I tried storing all methods in a separate map in updateDynamic, but I could not figure out a generic way to handle the arity problem. So is there a way to use Macros + Dynamics to have something like this?
EDIT:
Based on @Petr Pudlak's answer, I tried implementing something like this:
import scala.language.dynamics
import scala.language.implicitConversions
import collection.mutable
import DynamicType._

/**
 * A useful dynamic type that lets you add/delete fields and methods on a structure at runtime
 */
class DynamicType extends Dynamic {
  private val fields = mutable.Map.empty[String, Any] withDefault { key => throw new NoSuchFieldError(key) }
  private val methods = mutable.Map.empty[String, GenFn] withDefault { key => throw new NoSuchMethodError(key) }

  def selectDynamic(key: String) = fields(key)

  def updateDynamic(key: String)(value: Any) = value match {
    case fn0: Function0[Any] => methods(key) = { case Seq() => fn0() }
    case fn1: Function1[Any, Any] => methods(key) = fn1
    case fn2: Function2[Any, Any, Any] => methods(key) = fn2
    case _ => fields(key) = value
  }

  def applyDynamic(key: String)(args: Any*) = methods(key)(args)

  /**
   * Deletes a field (methods are fields too)
   * @return the old field value
   */
  def delete(key: String) = fields.remove(key)

  // todo: export/print to json
}

object DynamicType {
  import reflect.ClassTag

  type GenFn = PartialFunction[Seq[Any], Any]

  implicit def toGenFn1[A: ClassTag](f: (A) => Any): GenFn = { case Seq(a: A) => f(a) }
  implicit def toGenFn2[A: ClassTag, B: ClassTag](f: (A, B) => Any): GenFn = { case Seq(a: A, b: B) => f(a, b) }

  // todo: generalize to 22-args
}
1) It correctly handles fields vs. methods (even 0-arg ones) but is quite verbose (it currently works up to 2-arg methods only). Is there any way to simplify my code?
2) Is there any way to support dynamic method overloading (e.g. adding 2 dynamic methods with different signatures)? If I can get the signature of the function, I can use that as a key in my methods map.
In order to do that, we have to solve two problems:
1) Somehow unify all possible functions into one data type.
2) Deal with the fact that a dynamic select can return both values and functions, and we can't determine which in advance.
Here is one possibility:
First, let's define the most generic function type: take any number of arguments of any types and produce a result, or fail if the number or types of the arguments don't match:
type GenFn = PartialFunction[Seq[Any],Any]
Now we create a dynamic type where everything is GenFn:
import scala.language.dynamics

class DynamicType extends Dynamic {
  import collection.mutable

  private val fields =
    mutable.Map.empty[String, GenFn]
      .withDefault { key => throw new NoSuchFieldError(key) }

  def selectDynamic(key: String) = fields(key)
  def updateDynamic(key: String)(value: GenFn) = fields(key) = value
  def applyDynamic(key: String)(args: Any*) = fields(key)(args)
}
Next, let's create implicit conversions that convert functions of different arities into this type:
import scala.reflect.ClassTag
implicit def toGenFn0(f: => Any): GenFn =
  { case Seq() => f }

implicit def toGenFn1[A: ClassTag](f: (A) => Any): GenFn =
  { case Seq(x1: A) => f(x1) }

implicit def toGenFn2[A: ClassTag, B: ClassTag](f: (A, B) => Any): GenFn =
  { case Seq(x1: A, x2: B) => f(x1, x2) }

// ... other arities ...
Each conversion converts a function (or a value) to a GenFn: a partial function which fails if it is given the wrong number or types of arguments.
We use ClassTag in order to be able to match the correct types of arguments. Note that we treat values as functions of zero arity. This is how we deal with problem 2), at the cost of retrieving values by applying them to zero arguments, as in name().
Finally, we can do something like:
val foo = new DynamicType
foo.name = "Rick"
foo.greet = (name: String) =>
  s"Nice to meet you $name, my name is ${foo.name()}"
println(foo.greet("Nat"))
In order to support method overloading, all we need is to chain PartialFunctions. This can be accomplished as:
def updateDynamic(key: String)(value: GenFn) =
  fields.get(key) match {
    case None => fields(key) = value
    case Some(f) => fields(key) = f.orElse(value)
  }
(Note that it isn't thread-safe.) Then we can call something like:
val foo = new DynamicType
foo.name = "Rick"
foo.greet = (name: String) =>
  s"Nice to meet you $name, my name is ${foo.name()}"
foo.greet = (firstName: String, surname: String) =>
  s"Nice to meet you $firstName $surname, my name is ${foo.name()}"
println(foo.greet("Nat"))
println(foo.greet("Nat", "Smith"))
Note that this solution works a bit differently than the standard method overloading. Here it depends on the order in which functions are added. If a more general function is added first, the more specific one will never be invoked. So always add more specific functions first.
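For example (a sketch that assumes the toGenFn conversions above are in scope; bar and show are just illustrative names):
val bar = new DynamicType
bar.show = (n: Int) => "Int: " + n // more specific: added first
bar.show = (x: Any) => "Any: " + x // more general: added last
println(bar.show(1))    // prints "Int: 1" -- the Int overload matches first
println(bar.show("hi")) // prints "Any: hi" -- falls through to the general overload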
It will probably be more difficult the way you've done it, because it seems you don't distinguish the types of the functions (like my toGenFn... methods do), so if a function gets the wrong arguments, it'll just throw an exception instead of passing them on to the next in line. But it should work for functions with different numbers of arguments.
I don't think it's possible to avoid the verbosity caused by examining functions of various arguments, but I don't think it really matters. This is just a one-time work, the clients of DynamicType aren't affected by it.

Data structure that holds closures, parametrically, in Scala

I am implementing a GUI event system in Scala. I have something like:
case class EventObject
case class KeyEventObject extends EventObject
case class MouseEventObject extends EventObject
I would like to store event listener closures in a (multi-)map, like so:
var eventListeners = new MultiMap[EventDescriptor, (EventObject => Unit)];
My question is, is there some way to rewrite this so that the function signatures of the stored closure can be EventObject or any subclass? Something like the following:
var eventListeners = new MultiMap[EventDescriptor, [A <: EventObject](A => Unit)]
so that I can have the subtype known when I define the listener functions:
eventListeners.put(KEY_EVENT, (e:KeyEventObject) => { ... })
eventListeners.put(MOUSE_EVENT, (e:MouseEventObject) => { ... })
Not that many things are impossible. You can do the following, for example, with type classes:
class HMultiMap {
  import scala.collection.mutable.{ Buffer, HashMap }

  type Mapping[K, V]

  private[this] val underlying = new HashMap[Any, Buffer[Any]]

  def apply[K, V](key: K)(implicit ev: Mapping[K, V]) =
    underlying.getOrElse(key, Buffer.empty).toList.asInstanceOf[List[V]]

  def add[K, V](key: K)(v: V)(implicit ev: Mapping[K, V]) = {
    underlying.getOrElseUpdate(key, Buffer.empty) += v
    this
  }
}
And now:
sealed trait EventObject
case class KeyEventObject(c: Char) extends EventObject
case class MouseEventObject(x: Int, y: Int) extends EventObject

sealed trait EventDescriptor
case object KEY_EVENT extends EventDescriptor
case object MOUSE_EVENT extends EventDescriptor

class EventMap extends HMultiMap {
  class Mapping[K, V]

  object Mapping {
    implicit object k extends Mapping[KEY_EVENT.type, KeyEventObject => Unit]
    implicit object m extends Mapping[MOUSE_EVENT.type, MouseEventObject => Unit]
  }
}
It's a little messy, but the usage is much prettier:
val eventListeners = new EventMap
eventListeners.add(KEY_EVENT)((e: KeyEventObject) => println(e.c))
eventListeners.add(MOUSE_EVENT)((e: MouseEventObject) => println("X: " + e.x))
eventListeners.add(KEY_EVENT)((e: KeyEventObject) => println(e.c + " again"))
We can confirm that we can pick out individual kinds of event handlers:
scala> eventListeners(KEY_EVENT).size
res3: Int = 2
And we can pretend to fire an event and run all the handlers for it:
scala> eventListeners(KEY_EVENT).foreach(_(KeyEventObject('a')))
a
a again
And it's all perfectly safe, since nothing gets into the underlying loosely-typed map without the proper evidence. We'd get a compile-time error if we tried to add a function from String to Unit, for example.
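For example, a hypothetical extra line like this would be rejected at compile time, since there is no Mapping[KEY_EVENT.type, String => Unit] in scope:
// eventListeners.add(KEY_EVENT)((s: String) => println(s)) // does not compile: no implicit Mapping found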
EDIT
Seems impossible. Also, case-to-case inheritance is prohibited in Scala 2.10.
...
Maybe by using some specific trait, declaring the event object classes sealed, and extending that trait?
val eventListeners = new MultiMap[EventDescriptor, ((_ >: EventTrait) => Unit)]
From List sources:
:+[B >: A, That](elem: B)(implicit bf: CanBuildFrom[List[A], B, That]): That
That is a specific type; it could be built from your EventTrait by an implicit builder, one builder for each event type. Instead of elem: B, try using classOf[B]. Create get methods that access your MultiMap using different classOf values:
def getMouseEvent(ed: EventDescriptor) =
  multimap.entrySet
    .filter { case (a, b) => a == ed }
    .map { case (a, b) => (a, convertTo(classOf[MouseEvent], b)) }
    .filter { case (a, b) => b != null }
convertTo returns null if it could not convert the event to the appropriate type.
Ugly.
It's impossible.
Let's suppose you have such a map, and you take a listener out of it:
val f = eventListeners(key).head
How would you call the function f? With a parameter of type EventObject? You can't: it could be a KeyEventObject => Unit. With a parameter of type KeyEventObject? You can't: it could be a MouseEventObject => Unit.
You can use PartialFunction[EventObject, Unit] and check isDefinedAt every time, but it's an ugly way to do it.
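A quick sketch of that PartialFunction alternative, reusing the KeyEventObject/MouseEventObject definitions from the type-class answer above:
val keyListener: PartialFunction[EventObject, Unit] = {
  case KeyEventObject(c) => println(c)
}
val listeners = List(keyListener)

def fire(event: EventObject): Unit =
  listeners.foreach { pf => if (pf.isDefinedAt(event)) pf(event) }

fire(KeyEventObject('a'))    // prints: a
fire(MouseEventObject(1, 2)) // not defined for this listener, so nothing happens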

Elegant way to sort Array[B] for a subclass B < A, when A extends Ordered[A]?

Having defined a class A which extends Ordered[A], and a subclass B of A, how do I automatically sort an Array of Bs? The Scala compiler complains that it "could not find implicit value for parameter ord: Ordering[B]". Here's a concrete REPL example (Scala 2.8), with A = Score and B = CommentedScore:
class Score(val value: Double) extends Ordered[Score] {
  def compare(that: Score) = value.compare(that.value)
}
defined class Score
trait Comment { def comment: String }
defined trait Comment
class CommentedScore(value: Double, val comment: String) extends Score(value) with Comment
defined class CommentedScore
val s = new CommentedScore(10, "great")
s: CommentedScore = CommentedScore@842f23
val t = new CommentedScore(0, "mediocre")
t: CommentedScore = CommentedScore@dc2bbe
val commentedScores = Array(s, t)
commentedScores: Array[CommentedScore] = Array(CommentedScore@b3f01d, CommentedScore@4f3c89)
util.Sorting.quickSort(commentedScores)
error: could not find implicit value for parameter ord: Ordering[CommentedScore]
util.Sorting.quickSort(commentedScores)
^
How do I fix this (that is, sort an Array[B] = Array[CommentedScore] "for free", given that I know how to sort Array[A] = Array[Score]), in an elegant manner which avoids boilerplate?
Thanks!
Add the required implicit yourself:
implicit val csOrd: Ordering[CommentedScore] = Ordering.by(_.value)
You can put this in a CommentedScore companion object so that there is no boilerplate at use-site.
Edit: if you want the ordering method to be defined only at the top of the inheritance tree, you still have to provide an Ordering for each subclass, but you can define the compare method of the Ordering in terms of the one in the Score companion object, i.e.:
object Score {
  implicit val ord: Ordering[Score] = Ordering.by(_.value)
}

object CommentedScore {
  implicit val csOrd = new Ordering[CommentedScore] {
    def compare(x: CommentedScore, y: CommentedScore) = Score.ord.compare(x, y)
  }
}
If you don't want to redefine this for each subclass, you can use a generic method to produce the Ordering:
object Score {
  implicit def ord[T <: Score]: Ordering[T] = Ordering.by(_.value)
}
This is a bit less efficient since, being a def rather than a val, it creates a new Ordering each time one is required. However, the overhead is probably tiny. Also note that the Ordered trait and its compare method are not necessary now that we have Orderings.
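A sketch of what that looks like with the Ordered mixin dropped entirely (same fields as above, minus the Comment trait for brevity):
class Score(val value: Double)
object Score {
  // One generic Ordering covers Score and every subclass.
  implicit def ord[T <: Score]: Ordering[T] = Ordering.by(_.value)
}
class CommentedScore(value: Double, val comment: String) extends Score(value)

// The Ordering is found via the Score companion (a base class of CommentedScore),
// so sorting needs no per-subclass boilerplate:
Array(new CommentedScore(10, "great"), new CommentedScore(0, "mediocre")).sorted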
You might use Order from scalaz, which is contravariant, so you don't need to define it for every subclass. Here is an example:
import scalaz._
import Scalaz._

class Score(val value: Double)

object Score {
  implicit val scoreOrd: Order[Score] = orderBy(_.value)
}

trait Comment { def comment: String }

class CommentedScore(value: Double, val comment: String) extends Score(value) with Comment {
  override def toString = s"cs($value, $comment)"
}

def quickSort[E: Order](list: List[E]): List[E] = list match {
  case Nil => Nil
  case head :: tail =>
    val (less, more) = tail partition { e => implicitly[Order[E]].order(e, head) == LT }
    quickSort(less) ::: head :: quickSort(more)
}

println(quickSort(List(
  new CommentedScore(10, "great"),
  new CommentedScore(5, "ok"),
  new CommentedScore(8, "nice"),
  new CommentedScore(0, "mediocre")
))) // List(cs(0.0, mediocre), cs(5.0, ok), cs(8.0, nice), cs(10.0, great))
This works:
val scoreArray: Array[Score] = Array(s, t)
util.Sorting.quickSort(scoreArray)
Or if you are starting from the Array[CommentedScore]:
val scoreArray: Array[Score] = commentedScores.map(identity)
util.Sorting.quickSort(scoreArray)
Note you can sort more simply with:
scoreArray.sorted

Overriding curried functions in Scala

I was under the impression that this
// short syntax
def foo(bar: Bar)(baz: Baz): Quux
was syntax sugar for this
// long syntax
def foo(bar: Bar): (Baz) => Quux
But I cannot seem to mix the two when it comes to inheritance. The whole tree has to be defined in either the short syntax or the long syntax; never both.
For example:
case class Context
case class Work

trait ContextualWorker {
  def workWithContext(ctxt: Context)(work: Work): Traversable[Work]
}

class ShortConcreteWorker extends ContextualWorker {
  override def workWithContext(ctxt: Context)(work: Work) = Nil
}

class LongConcreteWorker extends ContextualWorker {
  // error on next line: method workWithContext overrides nothing <-------------
  override def workWithContext(ctxt: Context): (Work) => Traversable[Work] = {
    val setupCode = 1
    { work => Nil }
  }
}
If I change the trait to use the long syntax, then ShortConcreteWorker doesn't compile.
Is there a reason why these aren't interchangeable/inheritable? How have you gotten around it?
Right now the most flexible approach appears to be to define the tree in the long syntax, perhaps delegating to an implementation method in ShortConcreteWorker like so:
case class Context
case class Work

trait ContextualWorker {
  def workWithContext(ctxt: Context): (Work) => Traversable[Work]
}

class ShortConcreteWorker extends ContextualWorker {
  override def workWithContext(ctxt: Context) = workWithContextImpl(ctxt) _
  private def workWithContextImpl(ctxt: Context)(work: Work) = Nil
}

class LongConcreteWorker extends ContextualWorker {
  override def workWithContext(ctxt: Context): (Work) => Traversable[Work] = {
    val setupCode = 1
    { work => Nil }
  }
}
The two methods described quite simply have different signatures. The REPL confirms this:
scala> def foo1(a: Int)(b: Int): Int = a + b
foo1: (a: Int)(b: Int)Int
scala> def foo2(a: Int): (Int => Int) = (b: Int) => a + b
foo2: (a: Int)Int => Int
The first is a function that requires two arguments, given in separate argument lists, and returns an Int. The second is a function that takes one argument and returns a function from Int to Int. While these two things are conceptually similar, they are, in fact, different constructs, and Scala treats them as such.
This is not limited to functions with multiple argument lists. It works the same way here:
scala> def foo3(a: Int): Int = a + 1
foo3: (a: Int)Int
scala> def foo4: (Int => Int) = (a: Int) => a + 1
foo4: Int => Int
Note that there are different ramifications for usage as well. Because foo2 only accepts one argument, we can call it with just one argument. However, foo1 requires two arguments, and so we cannot simply call it with one. You can, however, use the _ syntax to convert it into a function value.
foo2(2) // Int => Int = <function1>
foo1(2) // error: missing arguments for method foo1
foo1(2) _ // Int => Int = <function1>
So to answer your question directly: the reason they are not interchangeable is that they are not the same. If they were, we would be able to call them the same way. If you could change the signature upon extension, how would Scala know which calling syntax to allow? The way to "get around" this is simply to make the signatures consistent.