costly computation occurring in both isDefined and apply of a PartialFunction - scala

It is quite possible that, to know whether a function is defined at some point, a significant part of the work needed to compute its value has to be done. In a PartialFunction, both isDefined and apply then end up repeating that work. What to do if this common job is costly?
There is the possibility of caching its result, hoping that apply will be called after isDefined. Definitely ugly.
I often wish that PartialFunction[A,B] were Function[A, Option[B]], to which it is clearly isomorphic. Or maybe there could be another method in PartialFunction, say applyOption(a: A): Option[B]. With some mixins, implementors would have the choice of implementing either isDefined and apply, or applyOption, or all of them to be on the safe side performance-wise. Clients which test isDefined just before calling apply would be encouraged to use applyOption instead.
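For illustration, a minimal sketch of what such a mixin might look like (OptionalFunction is a hypothetical name, not an existing API):
trait OptionalFunction[A, B] extends PartialFunction[A, B] {
  // Implementors provide only this; the costly work happens once here.
  def applyOption(a: A): Option[B]
  def isDefinedAt(a: A): Boolean = applyOption(a).isDefined
  def apply(a: A): B = applyOption(a).getOrElse(throw new MatchError(a))
}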
However, this is not how it is. Some major methods in the library, among them collect on collections, require a PartialFunction. Is there a clean (or not so clean) way to avoid paying for computations repeated between isDefined and apply?
Also, is the applyOption(a: A): Option[B] method reasonable? Does it sound feasible to add it in a future version? Would it be worth it?

Why is caching such a problem? In most cases, you have a local computation, so as long as you write a wrapper for the caching, you needn't worry about it. I have the following code in my utility library:
class DroppedFunction[-A, +B](f: A => Option[B]) extends PartialFunction[A, B] {
  private[this] var tested = false
  private[this] var arg: A = _
  private[this] var ans: Option[B] = None
  // Compute f(a) once and remember the argument/answer pair
  private[this] def cache(a: A): Unit = {
    if (!tested || a != arg) {
      tested = true
      arg = a
      ans = f(a)
    }
  }
  def isDefinedAt(a: A) = {
    cache(a)
    ans.isDefined
  }
  def apply(a: A) = {
    cache(a)
    ans.get
  }
}
class DroppableFunction[A, B](f: A => Option[B]) {
  def drop = new DroppedFunction(f)
}
implicit def function_is_droppable[A, B](f: A => Option[B]) = new DroppableFunction(f)
and then, if I have an expensive computation, I write a method A => Option[B] and do something like (f _).drop to use it in collect or whatnot. (If you wanted to do it inline, you could create a method that takes an A => Option[B] and returns a partial function.)
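A quick usage sketch, assuming the implicit conversion above is in scope (expensiveParse is a hypothetical stand-in for a costly computation):
// The costly work runs once per element, even though collect
// checks isDefinedAt before calling apply.
def expensiveParse(s: String): Option[Int] =
  scala.util.Try(s.toInt).toOption

val parsed = List("1", "two", "3").collect((expensiveParse _).drop)
// parsed == List(1, 3)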
(The opposite transformation, from PartialFunction to A => Option[B], is called lifting, hence the name "drop" here; "unlift" is, I think, a more widely used term for this opposite operation.)

Have a look at this thread, Rethinking PartialFunction. You're not the only one wondering about this.

This is an interesting question, and I'll give my 2 cents.
First off, resist the urge for premature optimization. Make sure the partial function really is the problem; I was amazed at how fast they are in some cases.
Now assuming there is a problem, where would it come from?
Could be a large number of case clauses
Complex pattern matching
Some complex computation in the if guards
One option I'd try is to find ways to fail fast. Break the pattern matching into layers, then chain partial functions; this way you can fail the match early. Also, extract repeated sub-matches. For example:
Let's assume OddEvenList is an extractor that breaks a list into an odd list and an even list:
var pf1: PartialFunction[List[Int], R] = {
  case OddEvenList(1 :: ors, 2 :: ers) => R1
  case OddEvenList(3 :: ors, 4 :: ers) => R2
}
Break it into two parts: one that matches the split, and one that tries to match the result (to avoid repeated computation). However, this may require some re-engineering:
var pf2: PartialFunction[(List[Int], List[Int]), R] = {
  case (1 :: ors, 2 :: ers) => R1
  case (3 :: ors, 4 :: ers) => R2
}
var pf1: PartialFunction[List[Int], R] = {
  case OddEvenList(ors, ers) if pf2.isDefinedAt((ors, ers)) => pf2((ors, ers))
}
I have used this when progressively reading XML files that had a rather inconsistent format.
Another option is to compose partial functions using andThen, although a quick test here seemed to indicate that only the first one is actually tested.
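For reference, a sketch of what I mean, as I understand the behavior on Scala 2.12 and earlier (2.13, if I recall correctly, added a PartialFunction-aware andThen overload that checks both sides):
val positive: PartialFunction[Int, Int] = { case n if n > 0 => n }
val even: PartialFunction[Int, String] = { case n if n % 2 == 0 => "even " + n }

// andThen only consults the first function's domain:
val both = positive andThen even
both.isDefinedAt(3) // true, although `even` is not defined at 3
// both(3)          // would throw a MatchError inside `even`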

There is absolutely nothing wrong with a caching mechanism inside the partial function, if:
the function always returns the same output when passed the same argument
it has no side effects
it is completely hidden from the rest of the world
Such a cached function is not distinguishable from a plain old pure partial function...

Related

Is there a universal method to create a tail recursive function in Scala?

While checking Intel's BigDL repo, I stumbled upon this method:
private def recursiveListFiles(f: java.io.File, r: Regex): Array[File] = {
  val these = f.listFiles()
  val good = these.filter(f => r.findFirstIn(f.getName).isDefined)
  good ++ these.filter(_.isDirectory).flatMap(recursiveListFiles(_, r))
}
I noticed that it was not tail recursive and decided to write a tail recursive version:
private def recursiveListFiles(f: File, r: Regex): Array[File] = {
  @scala.annotation.tailrec
  def recursiveListFiles0(f: Array[File], r: Regex, a: Array[File]): Array[File] = {
    f match {
      case Array() => a
      case htail => {
        val these = htail.head.listFiles()
        val good = these.filter(f => r.findFirstIn(f.getName).isDefined)
        recursiveListFiles0(these.filter(_.isDirectory) ++ htail.tail, r, a ++ good)
      }
    }
  }
  recursiveListFiles0(Array[File](f), r, Array.empty[File])
}
What made this difficult, compared to what I am used to, is the concept that a File can be transformed into an Array[File], which adds another level of depth.
What is the theory behind recursion on datatypes that have the following member?
def listTs[T]: T => Traversable[T]
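For concreteness, the worklist trick from my version above can be stated for any such member; a minimal sketch (collectAll and children are hypothetical names):
import scala.annotation.tailrec

// Walk any structure whose nodes expose children via T => Traversable[T],
// accumulating every node visited, in constant stack space.
def collectAll[T](root: T)(children: T => Traversable[T]): List[T] = {
  @tailrec
  def loop(pending: List[T], acc: List[T]): List[T] = pending match {
    case Nil => acc.reverse
    case t :: ts => loop(children(t).toList ++ ts, t :: acc)
  }
  loop(List(root), Nil)
}
// e.g. collectAll(dir)(d => Option(d.listFiles).toList.flatten)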
Short answer
If you generalize the idea and think of it as a monad (a polymorphic thing working for arbitrary type params), then you won't be able to write a tail-recursive implementation.
Trampolines try to solve this very problem by providing a way to evaluate a recursive computation without overflowing the stack. The general idea is to create a stream of pairs of (result, computation): at each step you return the result computed so far and a function that creates the next result (a.k.a. a thunk).
From Rich Dougherty’s blog:
A trampoline is a loop that repeatedly runs functions. Each function,
called a thunk, returns the next function for the loop to run. The
trampoline never runs more than one thunk at a time, so if you break
up your program into small enough thunks and bounce each one off the
trampoline, then you can be sure the stack won't grow too big.
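As an aside, the Scala standard library ships a small trampoline in scala.util.control.TailCalls; a minimal sketch of the idea:
import scala.util.control.TailCalls._

// Each step returns either a finished value (done) or a thunk for the
// next step (tailcall); calling .result runs the loop in constant stack.
def isEven(n: Int): TailRec[Boolean] =
  if (n == 0) done(true) else tailcall(isOdd(n - 1))

def isOdd(n: Int): TailRec[Boolean] =
  if (n == 0) done(false) else tailcall(isEven(n - 1))

isEven(100000).result // true, without overflowing the stack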
More + References
In the categorical sense, the theory behind such data types is closely related to Cofree Monads, fold and unfold functions, and in general to fixed-point types.
See this fantastic talk, Fun and Games with Fix, Cofree and Doobie by Rob Norris, which discusses a use case very similar to your question.
This article about Free monads and Trampolines is also related to your first question: Stackless Scala With Free Monads.
See also this part of the Matryoshka docs. Matryoshka is a Scala library implementing recursion schemes around the concept of fixed-point types.

'tee' operation on Scala's option type?

Is there some sort of 'tee' operation on Option available in Scala's standard library? The best I could find is foreach; however, its return type is Unit, so it cannot be chained.
This is what I am looking for: given an Option instance, perform some operation with side effects on its value if the option is not empty (Some[A]), otherwise do nothing; return the option in any case.
I have a custom implementation using an implicit class, but I am wondering whether there is a more common way to do this without implicit conversion:
object OptionExtensions {
  implicit class TeeableOption[A](value: Option[A]) {
    def tee(action: A => Unit): Option[A] = {
      value foreach action
      value
    }
  }
}
Example code:
import OptionExtensions._
val option: Option[Int] = Some(42)
option.tee(println).foreach(println) // will print 42 twice
val another: Option[Int] = None
another.tee(println).foreach(println) // does nothing
Any suggestions?
In order to avoid implicit conversion, instead of method chaining you can use function composition with the K combinator.
The K combinator gives you an idiomatic way to communicate the fact that you are going to perform a side effect.
Here is a short example:
object KCombinator {
  def tap[A](a: A)(action: A => Any): A = {
    action(a)
    a
  }
}
import KCombinator._
val func = ((_: Option[Int]).getOrElse(0))
  .andThen(tap(_)(println))
  .andThen(_ + 3)
  .andThen(tap(_)(println))
If we call our func with an argument of Option(3), the result will be an Int with the value of 6,
and this is what the console will look like:
3
6
There is no existing way to accomplish this in the standard library, because side effects are supposed to be minimized and isolated in functional programming. Depending on what your actual goals are, there are a couple of different ways to idiomatically accomplish your task.
In the case of doing a lot of println commands, instead of sprinkling them throughout your algorithm, you would typically gather the values in a collection and then do one foreach println at the end. This limits the side effects to the smallest possible impact, and the same goes for any other side effect: try to find a way to squeeze it into the smallest possible space.
If you are trying to chain a series of "actions", you should look into futures. Futures basically treat an action as a value and provide a lot of useful functions to work with them.
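A rough sketch of that approach: Future.andThen chains a side-effecting step while preserving the original value, much like the tee being asked about:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Success

val result: Future[Int] =
  Future(42)
    .andThen { case Success(v) => println(v) } // side effect only
    .map(_ + 1)                                // keeps chaining with the value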
Simply use map, and make your side-effecting functions conform to action: A => A rather than action: A => Unit.
def tprintln[A](a: A): A = {
  println(a)
  a
}
another.map(tprintln).foreach(println)

Is there any reason to make API of type Future[Try[A]] instead of Future[A]?

Taking into account that any pattern matching on a Future result matches Success[A] and Failure[A] (onComplete(), andThen()), because these expect a Try[A] as an argument, directly or through a partial function, could there be a case where I would want to state explicitly that a function is of type Future[Try[A]]?
There are various constructs in Scala which carry a failure case in them: Option, Either, Try and Future. (Future's main point is to abstract over asynchronous operations; the error handling is there for convenience.) Scalaz has even more: Validation (and Disjunction and Maybe, which are better versions of Either and Option).
They all treat erroneous values a bit differently, yet Try and Future are very similar: both wrap a Throwable. So IMHO Future[Try[A]] doesn't add much information (about the error); compare it to having Future[Future[A]] or Try[Try[A]]. OTOH Future[Option[A]] or Future[Either[MyError, A]] make sense to me.
There might be a situation where you have, for example, potentially failing f: A => B and g: B => C, and you'd like to avoid creating too many tasks in the ExecutionContext:
val a: Future[A] = ???
val c: Future[C] = a.map(f).map(g) // two tasks, not good
val c2: Future[Try[C]] = a.map(x => Try { f(x) } map g) // one task, maybe better

// get will throw, but the exception will be caught by Future's map.
// Almost no information is lost compared to `c2`, though you can no
// longer distinguish between a possible error in `a` and exceptions
// thrown by `f` or `g`.
val c3: Future[C] = a.map(x => Try { f(x) }.map(g).get)
In this case I'd rather refactor f and g to have better types, at least f: A => Option[B] and g: B => Option[C], ending up with Future[Option[C]].
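A sketch of that refactoring, reusing the hypothetical f, g and a from above:
// With f: A => Option[B] and g: B => Option[C], the pipeline is still a
// single task, and the absence of a value is explicit in the result type.
val c4: Future[Option[C]] = a.map(x => f(x).flatMap(g))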
Try[A] represents a synchronous computation that may fail.
Future[A] represents an asynchronous computation that may fail.
Under these definitions, Future[Try[A]] represents the result of a synchronous computation (that may fail) executed inside an asynchronous computation (that may fail).
Does it make sense? Not to me, but I'm open to creative interpretations.

PartialFunction That Isn't Partial

Is there a reason to use a PartialFunction on a function that's not partial?
scala> val foo: PartialFunction[Int, Int] = {
     |   case x => x * 2
     | }
foo: PartialFunction[Int,Int] = <function1>
foo is defined as a PartialFunction, but of course the case x will catch all input.
Is this simply bad code, since the PartialFunction type signals to the programmer that the function is undefined for certain inputs?
There is no advantage in using a PartialFunction instead of a Function, but if you have to pass a PartialFunction, then you have to pass a PartialFunction.
Note that, because of the inheritance relationship between these two, overloading a method to accept both results in something difficult to use, as type inference won't work.
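For example, a sketch of the inference problem (Demo is hypothetical; as I recall, the literal fails to type-check because the overloads leave no single expected type):
object Demo {
  def run(f: Int => Int): Int = f(1)
  def run(pf: PartialFunction[Int, Int]): Int = pf(1)
}
// Demo.run { case x => x * 2 }
// does not compile: the anonymous pattern-matching function's
// parameter type cannot be inferred when two overloads apply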
The thing is, there are many cases where what a trait/object/function definition requires is a PartialFunction, but in reality the implementation may not be partial at all. Case in point, take a look at def collect[B](f: PartialFunction[A,B]):
val myList = thatList collect {
  case Right(value) => value
  case Left(other) => other.toInt
}
It's clearly not a "real" partial function, as it is defined for all input. That said, if I wanted to, I could keep just the Right case.
However, if I had written collect to take a plain full function, I'd miss out on the desired behavior, that is, being both a filter and a map rolled into one, based on where the function is defined. That's nice behavior and allows for a lot of flexibility when writing my own code.
So I guess the better question is, will you ever want behavior to reflect that a function might not be defined everywhere? If the answer is no, then don't do it.
PartialFunction literals allow pattern matching directly on arguments (e.g. { case (a, b) => ... } instead of _ match { case (a, b) => ... }), which makes code more readable (see wheaties' answer for another example).
EDIT: apparently this is wrong, see Daniel C. Sobral's comment on his answer. Not deleting, so that the comments still make sense.

Are there any fundamental limitations that stop Scala from implementing pattern matching over functions?

In languages like SML, Erlang and a bunch of others we may define functions like this:
fun reverse [] = []
  | reverse (x :: xs) = reverse xs @ [x];
I know we can write an analog in Scala like this (and I know there are many flaws in the code below):
def reverse[T](lst: List[T]): List[T] = lst match {
  case Nil => Nil
  case x :: xs => reverse(xs) ++ List(x)
}
But I wonder if we could write the former code in Scala, perhaps with desugaring to the latter.
Are there any fundamental limitations to such syntax being implemented in the future (I mean really fundamental, e.g. the way type inference works in Scala, or something else, except the parser obviously)?
UPD
Here is a snippet of how it could look:
type T
def reverse(Nil: List[T]) = Nil
def reverse(x :: xs: List[T]): List[T] = reverse(xs) ++ List(x)
It really depends on what you mean by fundamental.
If you are really asking "is there a technical showstopper that would prevent this feature from being implemented", then I would say the answer is no. You are talking about desugaring, and you are on the right track here. All there is to do is basically stitch several separate cases into one single function, and this can be done as a mere preprocessing step (it only requires syntactic knowledge, no semantic knowledge). But for this to even make sense, I would define a few rules:
The function signature is mandatory (in Haskell, for example, the signature is optional, whether you are defining the function at once or in several parts). We could try to arrange to live without the signature and attempt to extract it from the different parts, but the lack of type information would quickly come back to bite us. A simpler argument is that if we were to try to infer an implicit signature, we might as well do it for all methods. But the truth is that there are very good reasons to have explicit signatures in Scala, and I can't imagine changing that.
All the parts must be defined within the same scope. To start with, they must be declared in the same file, because each source file is compiled separately and thus a simple preprocessor would not be enough to implement the feature. Second, we still end up with a single method in the end, so it's only natural to have all the parts in the same scope.
Overloading is not possible for such methods (otherwise we would need to repeat the signature for each part just so the preprocessor knows which part belongs to which overload)
Parts are added (stitched) to the generated match in the order they are declared
So here is how it could look:
def reverse[T](lst: List[T]): List[T] // Exactly like an abstract def (provides the signature)
// .... some unrelated code here...
def reverse(Nil) = Nil
// .... another bit of unrelated code here...
def reverse(x :: xs) = reverse(xs) ++ List(x)
Which could be trivially transformed into:
def reverse[T](lst: List[T]): List[T] = lst match {
  case Nil => Nil
  case x :: xs => reverse(xs) ++ List(x)
}
// .... some unrelated code here...
// .... another bit of unrelated code here...
It is easy to see that the above transformation is very mechanical and can be done by just manipulating the source AST (the AST produced by a slightly modified grammar that accepts these new constructs) and transforming it into the target AST (the AST produced by the standard Scala grammar).
Then we can compile the result as usual.
So there you go, with a few simple rules we are able to implement a preprocessor that does all the work to implement this new feature.
If by fundamental you are asking "is there anything that would make this feature out of place", then it can be argued that this does not feel very Scala-like. But more to the point, it does not bring much to the table. Scala's authors actually tend toward making the language simpler (as in fewer built-in features, trying to move some built-in features into libraries), and adding new syntax that is not really more readable goes against that goal of simplification.
In SML, your code snippet is literally just syntactic sugar (a "derived form" in the terminology of the language spec) for:
val rec reverse = fn x =>
  case x of [] => []
          | x :: xs => reverse xs @ [x]
which is very close to the Scala code you show. So no, there is no "fundamental" reason Scala couldn't provide the same kind of syntax. The main obstacle is Scala's need for more type annotations, which makes this shorthand syntax far less attractive in general, and probably not worth the while.
Note also that the specific syntax you suggest would not fly well, because there is no way to syntactically distinguish one case-by-case function definition from two overloaded functions. You would probably need some alternative syntax, similar to SML's use of "|".
I don't know SML or Erlang, but I know Haskell. Haskell is a language without method overloading. Method overloading combined with such pattern matching could lead to ambiguities. Imagine the following code:
def f(x: String) = "String "+x
def f(x: List[_]) = "List "+x
What should it mean? It could mean method overloading, i.e. the method is determined at compile time. It could also mean pattern matching, in which case there would be just one f(x: AnyRef) method that does the matching.
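Roughly, the pattern-matching interpretation would collapse the two definitions into something like:
def f(x: AnyRef): String = x match {
  case s: String  => "String " + s
  case l: List[_] => "List " + l
}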
Scala also has named parameters, which would probably also be broken.
I don't think Scala is able to offer simpler syntax than what you have shown in general. A simpler syntax may IMHO work in some special cases only.
There are at least two problems:
[ and ] are reserved characters because they are used for type arguments. The compiler allows spaces around them, so using them for list literals would not be an option.
The other problem is that = returns Unit, so the expression after the | would not return any result.
The closest I could come up with is this (note that it is very specialized towards your example):
// Define a class to hold the values left and right of the | sign
class |[T, S](val left: T, val right: PartialFunction[T, T])

// Create a class that contains the | operator
class OrAssoc[T](left: T) {
  def |(right: PartialFunction[T, T]): T | T = new |(left, right)
}

// Add the | to any potential target
implicit def anyToOrAssoc[S](left: S): OrAssoc[S] = new OrAssoc(left)

object fun {
  // Use the magic of the update method
  def update[T, S](choice: T | S): T => T = { arg =>
    if (choice.right.isDefinedAt(arg)) choice.right(arg)
    else choice.left
  }
}

// Use the above construction to define a new method
val reverse: List[Int] => List[Int] =
  fun() = List.empty[Int] | {
    case x :: xs => reverse(xs) ++ List(x)
  }

// Call the method
reverse(List(3, 2, 1)) // List(1, 2, 3)