What does the @suspendable annotation in Scala do?

While reading the Scala.React implementation on GitHub I've stumbled across the @suspendable annotation:
object Reactor {
  def loop[A](op: FlowOps => Unit @suspendable): Reactor = new Reactor {
    def body = while (!isDisposed) op(this)
  }
}
In the paper Deprecating the Observer Pattern with Scala.React, the Reactor object is used in the following way:
Reactor.loop { self =>
  // step 1
  val path = new Path((self await mouseDown).position)
  self.loopUntil(mouseUp) { // step 2
    path.lineTo(m.position)
    draw(path)
  }
  path.close() // step 3
  draw(path)
}
Notably, the code in the Reactor body can wait for events like mouseDown. It therefore looks like the code is executed asynchronously, even though there is no explicit use of threads. Because I couldn't find out what the @suspendable annotation does, I feel I need to understand it before I can understand the rest of the implementation. Therefore:
What does the @suspendable annotation do?
Are there scenarios, where it is required?
When would I use it?
My suspicion is that it somehow abstracts over, and allows for, asynchronous execution. If that is true:
How does it work "under the hood" / how is it implemented?

@suspendable is an annotation introduced by the now-abandoned Scala Continuations library (which for a while was part of the Scala standard library) and used by the associated compiler plugin.
The project explored adding delimited continuations to Scala, and about the best extant documentation for it is in the doc-comments here. @suspendable is an alias for @cps[Unit], which basically signals the compiler plugin to perform a CPS transform at shift calls within a reset block.
A rough idea of what the plugin does is:
def five(): Int @cps[Int] = shift { k: (Int => Int) => k(5) }
reset { five() + 1 }
ultimately gets translated to something as simple as
val kont: Int => Int = _ + 1
five(kont)
The basic idea is to translate the remainder of the expression into a function which takes the result up to that point as an argument, and to pass that function into a function which will calculate the result up to that point and then call the function representing the remainder of the expression. This is basically what you do manually when using a callback-based API (e.g. in Node.js).
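For comparison, here is what the same idea looks like when you write the continuation by hand in plain Scala (no plugin involved; fiveCps is a made-up name for this illustration):

// the caller supplies "the rest of the computation" explicitly
def fiveCps(k: Int => Int): Int = k(5)

val result = fiveCps(_ + 1) // 6, morally the same as reset { five() + 1 }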
The Future APIs (as well as the async/await compiler plugins for working with Futures) simplified and supplanted the use of CPS for asynchronous programming (which is, outside of writing compilers, the only widespread non-academic use of CPS), as well as adding more intuitive support for concurrency and asynchrony, which led to the deprecation and removal of continuation support.

This is part of the deprecated scala-continuations library.
You can find documentation for it at this URL:
https://www.scala-lang.org/files/archive/api/2.11.12/scala-continuations-library/#scala.util.continuations.package

Related

Is a while loop already implemented with pass-by-name parameters?

The Scala Tour Of Scala docs explain pass-by-name parameters using a whileLoop function as an example.
def whileLoop(condition: => Boolean)(body: => Unit): Unit =
  if (condition) {
    body
    whileLoop(condition)(body)
  }
var i = 2
whileLoop (i > 0) {
  println(i)
  i -= 1
} // prints 2 1
The section explains that if the condition is not met then the body is not evaluated, thus improving performance by not evaluating a body of code that isn't being used.
Does Scala's implementation of while already use pass-by-name parameters?
If there's a reason or specific cases where it's not possible for the implementation to use pass-by-name parameters, please explain to me, I haven't been able to find any information on it so far.
EDIT: As per Valy Dia's (https://stackoverflow.com/users/5826349/valy-dia) answer, I would like to add another question...
Would a method implementation of the while statement perform better than the statement itself if it's possible not to evaluate the body at all for certain cases? If so, why use the while statement at all?
I will try to test this, but I'm new to Scala so it might take me some time. If someone would like to explain, that would be great.
Cheers!
The while statement is not a method, so the terminology "by-name parameter" is not really relevant... Having said that, the while statement has the following structure:
while (condition) {
  body
}
where the condition is repeatedly evaluated and the body is evaluated only if the condition holds, as these small examples show:
scala> while(false){ throw new Exception("Boom") }
// Does nothing
scala> while(true){ throw new Exception("Boom") }
// java.lang.Exception: Boom
scala> while(throw new Exception("boom")){ println("hello") }
// java.lang.Exception: Boom
Would a method implementation of the while statement perform better than the statement itself if it's possible not to evaluate the body at all for certain cases?
No. The built-in while also does not evaluate the body at all unless it has to, and it is going to compile to much more efficient code (because it does not need to introduce the "thunks"/closures/lambdas/anonymous functions that are used to implement "pass-by-name" under the hood).
The example in the book was just showing how you could implement it with functions if there was no built-in while statement.
I assumed that they were also inferring that the while statement's body will be evaluated whether or not the condition was met
No, that would make the built-in while totally useless. That is not what they were driving at. They wanted to say that you can do this kind of thing with "call-by-name" (as opposed to "call-by-value", not as opposed to what the while loop does -- because the latter also works like that).
The main takeaway is that you can build something that looks like a control structure in Scala, because you have syntactic sugar like "call-by-name" and "last argument group taking a function can be called with a block".
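For example, a hypothetical repeatUntil control structure (not part of the standard library) can be built from exactly those two features:

// by-name parameters keep body and condition unevaluated until used;
// the block syntax makes the call site look like a built-in statement
def repeatUntil(body: => Unit)(condition: => Boolean): Unit = {
  body
  if (!condition) repeatUntil(body)(condition)
}

var i = 3
repeatUntil {
  println(i)
  i -= 1
}(i == 0) // prints 3 2 1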

Understanding the continuation theorem in Scala

So, I was trying to learn about continuations. I came across the following saying (link):
Say you're in the kitchen in front of the refrigerator, thinking about a sandwich. You take a continuation right there and stick it in your pocket. Then you get some turkey and bread out of the refrigerator and make yourself a sandwich, which is now sitting on the counter. You invoke the continuation in your pocket, and you find yourself standing in front of the refrigerator again, thinking about a sandwich. But fortunately, there's a sandwich on the counter, and all the materials used to make it are gone. So you eat it. :-) — Luke Palmer
Also, I saw a program in Scala:
var k1 : (Unit => Sandwich) = null
reset {
shift { k : Unit => Sandwich) => k1 = k }
makeSandwich
}
val x = k1()
I don't really know the syntax of Scala (it looks similar to Java and C mixed together), but I would like to understand the concept of a continuation.
Firstly, I tried to run this program (by adding it to a main). It fails; I think it has a syntax error due to the ) near Sandwich, but I'm not sure. I removed it, but it still does not compile.
How can I create a fully compilable example that shows the concept of the story above?
How does this example show the concept of a continuation?
In the link above there was the following saying: "Not a perfect analogy in Scala because makeSandwich is not executed the first time through (unlike in Scheme)". What does it mean?
Since you seem to be more interested in the concept of the "continuation" rather than specific code, let's forget about that code for a moment (especially because it is quite old and I don't really like those examples because IMHO you can't understand them correctly unless you already know what a continuation is).
Note: this is a very long answer with some attempts to describe what a continuation is and why it is useful. There are some examples in Scala-like pseudo-code, none of which can actually be compiled and run (there is just one compilable example at the very end, and it references another example from the middle of the answer). Expect to spend a significant amount of time just reading this answer.
Intro to continuations
Probably the first thing you should do to understand a continuation is to forget about how modern compilers for most imperative languages work, how most modern CPUs work, and particularly about the idea of the call stack. These are actually implementation details (although quite popular and quite useful in practice).
Assume you have a CPU that can execute some sequence of instructions. Now you want to have a high-level language that supports the idea of methods that can call each other. The obvious problem you face is that the CPU needs some "forward only" sequence of commands, but you want some way to "return" results from a sub-program to the caller. Conceptually this means that, before the call, you need some way to store all the state of the caller method that is required for it to continue to run after the result of the sub-program is computed, pass it to the sub-program, and then ask the sub-program at the end to continue execution from that stored state. This stored state is exactly a continuation. In most modern environments those continuations are stored on the call stack, and often there are assembly instructions specifically designed to help handle this (like call and return). But again, this is just an implementation detail. Potentially they might be stored in an arbitrary way and it would still work.
So now let's re-iterate this idea: a continuation is a state of the program at some point that is enough to continue its execution from that point, typically with no additional input or some small known input (like the return value of the called method). Running a continuation is different from a method call in that a continuation usually never explicitly returns execution control back to the caller; it can only pass it to another continuation. Potentially you can create such a state yourself, but in practice, for the feature to be useful, you need some support from the compiler to build continuations automatically or to emulate it in some other way (this is why the Scala code you see requires a compiler plugin).
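As a minimal plain-Scala illustration of that idea (the names are invented for the example), "returning" can be expressed as calling a function that stands for the rest of the caller's work:

// the callee never "returns" in the usual sense; it hands its result to the
// caller-supplied continuation k
def add(a: Int, b: Int, k: Int => Unit): Unit = k(a + b)

add(2, 3, sum => println(s"the caller continues here with $sum"))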
Asynchronous calls
Now there is an obvious question: why are continuations useful at all? Actually there are a few more scenarios besides the simple "return" case. One such scenario is asynchronous programming. Actually, if you do some asynchronous call and provide a callback to handle the result, this can be seen as passing a continuation. Unfortunately, most modern languages do not support automatic continuations, so you have to grab all the relevant state yourself. Another problem appears if you have some logic that needs a sequence of many async calls, and if some of the calls are conditional you easily get to callback hell. The way continuations help you avoid it is by allowing you to build a method with effectively inverted control flow. With a typical call it is the caller that knows the callee and expects to get a result back in a synchronous way. With continuations you can write a method with several "entry points" (or "return-to points") for different stages of the processing logic that you can just pass to some other method, and that method can still return to exactly that position.
Consider the following example (in pseudo-code that is Scala-like but is actually far from real Scala in many details):
def someBusinessLogic() = {
  val userInput = getIntFromUser()
  val firstServiceRes = requestService1(userInput)
  val secondServiceRes = if (firstServiceRes % 2 == 0) requestService2v1(userInput) else requestService2v2(userInput)
  showToUser(combineUserInputAndResults(userInput, secondServiceRes))
}
If all those calls are synchronous blocking calls, this code is easy. But assume all those get and request calls are asynchronous. How do you re-write the code? The moment you put the logic in callbacks, you lose the clarity of the sequential code. And here is where continuations might help you:
def someBusinessLogicCont() = {
  // the method entry point
  val userInput
  getIntFromUserAsync(cont1, captureContinuationExpecting(entry1, userInput))

  // entry/return point after user input
  entry1:
  val firstServiceRes
  requestService1Async(userInput, captureContinuationExpecting(entry2, firstServiceRes))

  // entry/return point after the first request to the service
  entry2:
  val secondServiceRes
  if (firstServiceRes % 2 == 0) {
    requestService2v1Async(userInput, captureContinuationExpecting(entry3, secondServiceRes))
    // entry/return point after the second request to the service v1
    entry3:
  } else {
    requestService2v2Async(userInput, captureContinuationExpecting(entry4, secondServiceRes))
    // entry/return point after the second request to the service v2
    entry4:
  }

  showToUser(combineUserInputAndResults(userInput, secondServiceRes))
}
It is hard to capture the idea in pseudo-code. What I mean is that all those Async methods never return. The only way to continue execution of someBusinessLogicCont is to call the continuation passed into the "async" method. The captureContinuationExpecting(label, variable) call is supposed to create a continuation of the current method at the label, with the input (return) value bound to the variable. With such a re-write you still have sequential-looking business logic even with all those asynchronous calls. So now, for getIntFromUserAsync, the second argument looks like just another asynchronous (i.e. never-returning) method that requires one integer argument. Let's call this type Continuation[T]:
trait Continuation[T] {
  def continue(value: T): Nothing
}
Logically, Continuation[T] looks like a function T => Unit, or rather T => Nothing, where Nothing as the return type signifies that the call actually never returns (note that in the actual Scala implementation such calls do return, so there is no Nothing there, but I think it is conceptually easier to think about no-return continuations).
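A tiny hypothetical illustration in terms of the trait above: an "async" method that never returns control and only ever calls the continuation it was given.

// 42 is a canned value standing in for a result that would arrive asynchronously
def readIntAsync(k: Continuation[Int]): Nothing =
  k.continue(42)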
Internal vs external iteration
Another example is the problem of iteration. Iteration can be internal or external. An internal iteration API looks like this:
trait CollectionI[T] {
  def forEachInternal(handler: T => Unit): Unit
}
External iteration looks like this:
trait Iterator[T] {
  def nextValue(): Option[T]
}

trait CollectionE[T] {
  def forEachExternal(): Iterator[T]
}
Note: often an Iterator has two methods, such as hasNext and a nextValue returning T, but that would just make the story a bit more complicated. Here I use a merged nextValue returning Option[T], where None means the end of the iteration and Some(value) means the next value.
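To make the trade-off discussed next concrete, this is what the two styles look like with ordinary standard-library collections, before continuations enter the picture:

val coll = List(1, 2, 3, 4)

// internal iteration: the collection drives the loop; the user stashes
// intermediate state in a variable captured by the handler
var sumInternal = 0
coll.foreach(x => sumInternal += x)

// external iteration: the user drives the loop; intermediate state lives
// naturally in the user's own control flow
val it = coll.iterator
var sumExternal = 0
while (it.hasNext) sumExternal += it.next()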
Assuming the Collection is implemented by something more complicated than an array or a simple list (for example, some kind of a tree), there is a conflict here between the implementer of the API and the API user if you use a typical imperative language. The conflict is over a simple question: who controls the stack (i.e. the easy-to-use state of the program)?
Internal iteration is easier for the implementer because he controls the stack and can easily store whatever state is needed to move to the next item, but for the API user things become tricky if she wants to do some aggregation of the stored data, because now she has to save the state between calls to the handler somewhere. You also need some additional tricks to let the user stop the iteration at some arbitrary place before the end of the data (consider trying to implement find via forEach). Conversely, external iteration is easy for the user: she can store all the state necessary to process the data in any way in local variables, but the API implementer now has to store his state between calls to nextValue somewhere else.
So fundamentally the problem arises because there is only one place to easily store the state of "your" part of the program (the call stack) and two conflicting users for that place. It would be nice if you could just have two different independent places for the state: one for the implementer and another for the user. And continuations provide exactly that. The idea is that we can pass execution control between two methods back and forth using two continuations (one for each part of the program). Let's change the signatures to:
// internal iteration
// continuation of the iterator
type ContIterI[T] = Continuation[(ContCallerI[T], ContCallerLastI)]
// continuation of the caller
type ContCallerI[T] = Continuation[(T, ContIterI[T])]
// last continuation of the caller
type ContCallerLastI = Continuation[Unit]
// external iteration
// continuation of the iterator
type ContIterE[T] = Continuation[ContCallerE[T]]
// continuation of the caller
type ContCallerE[T] = Continuation[(Option[T], ContIterE[T])]
trait Iterator[T] {
  def nextValue(cont: ContCallerE[T]): Nothing
}

trait CollectionE[T] {
  def forEachExternal(): Iterator[T]
}

trait CollectionI[T] {
  def forEachInternal(cont: ContCallerI[T]): Nothing
}
Here the ContCallerI[T] type, for example, means that this is a continuation (i.e. a state of the program) that expects two input parameters to continue running: one of type T (the next element) and another of type ContIterI[T] (the continuation to switch back to). Now you can see that the new forEachInternal and the new forEachExternal + Iterator have almost the same signatures. The only difference is in how the end of the iteration is signaled: in one case by returning None and in the other by passing and calling another continuation (ContCallerLastI).
Here is a naive pseudo-code implementation of a sum of elements in an array of Int using these signatures (an array is used instead of something more complicated to simplify the example):
class ArrayCollection[T](val data: Array[T]) extends CollectionI[T] {
  def forEachInternal(cont0: ContCallerI[T], lastCont: ContCallerLastI): Nothing = {
    var contCaller = cont0
    for (i <- 0 until data.length) {
      val contIter = captureContinuationExpecting(label, contCaller)
      contCaller.continue(data(i), contIter)
      label:
    }
  }
}

def sum(arr: ArrayCollection[Int]): Int = {
  var sum = 0
  val elem: Int
  val iterCont: ContIterI[Int]

  val contAdd0 = captureContinuationExpecting(labelAdd, elem, iterCont)
  val contLast = captureContinuation(labelReturn)
  arr.forEachInternal(contAdd0, contLast)

  labelAdd:
  sum += elem
  val contAdd = captureContinuationExpecting(labelAdd, elem, iterCont)
  iterCont.continue(contAdd)
  // note that the code never executes this line; the only way to jump out of labelAdd is to call contLast

  labelReturn:
  return sum
}
Note how both implementations of the forEachInternal and of the sum methods look fairly sequential.
Multi-tasking
Cooperative multitasking, also known as coroutines, is actually very similar to the iteration example. Cooperative multitasking is the idea that the program can voluntarily give up ("yield") its execution control either to the global scheduler or to another known coroutine. Actually the last (re-written) example of sum can be seen as two coroutines working together: one doing iteration and another doing summation. But more generally your code might yield its execution to some scheduler that then selects which other coroutine to run next. And what the scheduler does is manage a bunch of continuations, deciding which to continue next.
Preemptive multitasking can be seen as a similar thing, but the scheduler is run by some hardware interrupt, and then the scheduler needs a way to create a continuation of the program being executed just before the interrupt from the outside of that program rather than from the inside.
Scala examples
What you see is a really old article referring to Scala 2.8 (while the current versions are 2.11 and 2.12, with 2.13 coming soon). As @igorpcholkin correctly pointed out, you need to use the Scala continuations compiler plugin and library. The sbt compiler plugin page has an example of how to enable exactly that plugin for Scala 2.12 (and @igorpcholkin's answer has the magic strings for Scala 2.11):
val continuationsVersion = "1.0.3"
autoCompilerPlugins := true
addCompilerPlugin("org.scala-lang.plugins" % "scala-continuations-plugin_2.12.2" % continuationsVersion)
libraryDependencies += "org.scala-lang.plugins" %% "scala-continuations-library" % continuationsVersion
scalacOptions += "-P:continuations:enable"
The problem is that the plugin is semi-abandoned and not widely used in practice. Also, the syntax has changed since the Scala 2.8 days, so it is hard to get those examples running even if you fix the obvious syntax bugs like a missing ( here and there. The reason for that state is given on GitHub as:
You may also be interested in https://github.com/scala/async, which covers the most common use case for the continuations plugin.
What that plugin does is emulate continuations using code rewriting (I suppose it is really hard to implement true continuations on top of the JVM execution model). Under such rewritings, a natural way to represent a continuation is as some function (typically called k or k1 in those examples).
So now, if you managed to read the wall of text above, you can probably interpret the sandwich example correctly. AFAIU that example uses a continuation as a means to emulate "return". If we restate it with more details, it could go like this:
You (your brain) are inside some function that at some point decides it wants a sandwich. Luckily you have a sub-routine that knows how to make a sandwich. You store your current brain state as a continuation into your pocket and call the sub-routine, telling it that when the job is done it should continue the continuation from the pocket. Then you make a sandwich according to that sub-routine, messing up your previous brain state. At the end, the sub-routine runs the continuation from the pocket and you return to the state just before the call of the sub-routine: you forget all your state from during that sub-routine (i.e. how you made the sandwich), but you can see the changes in the outside world, i.e. that the sandwich is made now.
To provide at least one compilable example with the current version of the scala-continuations, here is a simplified version of my asynchronous example:
case class RemoteService(private val readData: Array[Int]) {
  private var readPos = -1

  def asyncRead(callback: Int => Unit): Unit = {
    readPos += 1
    callback(readData(readPos))
  }
}

def readAsyncUsage(rs1: RemoteService, rs2: RemoteService): Unit = {
  import scala.util.continuations._

  reset {
    val read1 = shift(rs1.asyncRead)
    val read2 = if (read1 % 2 == 0) shift(rs1.asyncRead) else shift(rs2.asyncRead)
    println(s"read1 = $read1, read2 = $read2")
  }
}

def readExample(): Unit = {
  // this prints 1-42
  readAsyncUsage(RemoteService(Array(1, 2)), RemoteService(Array(42)))
  // this prints 2-1
  readAsyncUsage(RemoteService(Array(2, 1)), RemoteService(Array(42)))
}
Here the remote calls are emulated (mocked) with fixed data provided in arrays. Note how readAsyncUsage looks like totally sequential code despite the non-trivial logic of which remote service to call for the second read depending on the result of the first read.
For a full example you need to prepare the Scala compiler to use continuations and also use a special compiler plugin and library.
The simplest way is to create a new sbt project in IntelliJ IDEA with the following files: build.sbt in the root of the project and CTest.scala inside src/main.
Here is contents of both files:
build.sbt:
name := "ContinuationSandwich"
version := "0.1"
scalaVersion := "2.11.6"
autoCompilerPlugins := true
addCompilerPlugin(
"org.scala-lang.plugins" % "scala-continuations-plugin_2.11.6" % "1.0.2")
libraryDependencies +=
"org.scala-lang.plugins" %% "scala-continuations-library" % "1.0.2"
scalacOptions += "-P:continuations:enable"
CTest.scala:
import scala.util.continuations._

object CTest extends App {
  case class Sandwich()

  def makeSandwich = {
    println("Making sandwich")
    new Sandwich
  }

  var k1: (Unit => Sandwich) = null

  reset {
    shift { k: (Unit => Sandwich) => k1 = k }
    makeSandwich
  }

  val x = k1()
}
What the code above essentially does is call the makeSandwich function (in a convoluted manner). So the execution result is just "Making sandwich" being printed to the console. The same result would be achieved without continuations:
object CTest extends App {
  case class Sandwich()

  def makeSandwich = {
    println("Making sandwich")
    new Sandwich
  }

  val x = makeSandwich
}
So what's the point? My understanding is that we want to "prepare a sandwich", ignoring the fact that we may not be ready for that. We mark a point in time that we want to return to after all necessary conditions are met (i.e. we have all the necessary ingredients ready). After we fetch all the ingredients we can return to the mark and "prepare a sandwich", "forgetting that we were unable to do that in the past". Continuations allow us to "mark a point in time in the past" and return to that point.
Now step by step. k1 is a variable to hold a pointer to a function which should allow us to "create a sandwich". We know this because of how k1 is declared: (Unit => Sandwich).
However, initially the variable is not initialized (k1 = null: "there are no ingredients to make a sandwich yet"), so we can't call the sandwich-preparing function through that variable yet.
So we mark a point of execution we want to return to (or a point in time in the past we want to return to) using the reset statement.
makeSandwich is another pointer to a function which actually allows us to make a sandwich. It's the last statement of the reset block, and hence it becomes the continuation passed to the shift function as its argument k (shift { k : (Unit => Sandwich) ... ). Inside shift we assign that argument to the k1 variable (k1 = k), thus making k1 ready to be called as a function. After that we return to the execution point marked by reset. The next statement is the execution of the function pointed to by the k1 variable, which is now properly initialized, so finally we call makeSandwich, which prints "Making sandwich" to the console. It also returns an instance of the Sandwich class, which is assigned to the x variable.
Not sure; probably it means that makeSandwich is not called inside the reset block but only afterwards, when we call it as k1().

Is it safe to remove elements from a collection.mutable.HashSet during iteration?

A mutable Set's retain method is implemented as follows:
def retain(p: A => Boolean): Unit =
  for (elem <- this.toList) // SI-7269 toList avoids ConcurrentModificationException
    if (!p(elem)) this -= elem
But if I implement my own method that doesn't make a copy for iterating, nothing blows up.
def dumbRetain[A](self: mutable.Set[A], p: A => Boolean): Unit =
  for (elem <- self)
    if (!p(elem)) self -= elem

dumbRetain(mutable.HashSet(1, 2, 3, 4, 5, 6), Set(2, 4, 6))
// everything is ok
I see that SI-7269's test case uses the JavaConversions wrapper around a java Set/Map, and it seems like the issue arises from the underlying java collection.
I know there will never be a java collection passed to my algorithm, so can I use dumbRetain without worrying about the ConcurrentModificationException? Or is this "coincidental behavior" that I shouldn't rely on?
Edit: to clarify, I would be using dumbRetain as an implementation detail in an algorithm which would be in full control of what it passes to dumbRetain, and this would be run in a single-threaded context.
This seems to rely on the specific implementation of mutable.HashSet, and there is nothing in the API that guarantees that it would work for all other implementations of mutable.Set, even if we exclude all wrappers for the Java collections.
The for-loop
for (elem <- self) {
...
}
is desugared into foreach, which for mutable.HashSet is implemented as follows:
override def foreach[U](f: A => U) {
  var i = 0
  val len = table.length
  while (i < len) {
    val curEntry = table(i)
    if (curEntry ne null) f(entryToElem(curEntry))
    i += 1
  }
}
Essentially, it simply loops through the Array of the underlying FlatHashTable and invokes the passed function f on every element. The whole foreach simply does not have any lines which could throw anything; it doesn't check for concurrent [footnote-1] modifications at all.
A ConcurrentModificationException seems to be the less troubling case: at least your program fails fast, and even gives a detailed stack trace that points to the line in which the problem occurred. It would actually be much worse if it simply deteriorated into undefined behavior without throwing anything; that would be the worst case. However, this worst case shouldn't occur for collections from the standard library: Throw ConcurrentModificationException exception's in scala collections? #188
Quote:
In scala/scala#5295 (merged in to 2.12.x) I made sure that removing the element last returned from an iterator would not cause a problem for the iterator.
So, as long as you clearly state in the documentation that only collections from the standard library are supported, you will most likely not have any problems using it in your own code. But if you use it in a public interface, this would be an invitation for a bug analogous to "SI-7269" quoted in your question.
[footnote-1] "concurrent" as in "ConcurrentModificationException", not as in "concurrently executed threads".
EDIT: I've tried to choose less ambiguous formulations. Great thanks to @Dima for the feedback and the numerous suggestions.
Yeah, you can do it, as long as you are sure this is Scala's native HashSet implementation, not a wrapper around a Java one ... and with the understanding that this is not thread-safe and should never be used concurrently (the original HashSet.retain is that way too, as are the other mutators).
Better yet, just use immutable Set.filter, unless you actually have real hard evidence (not just intuition) demonstrating that your specific case absolutely requires a mutable container.
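For reference, the immutable route suggested above is just a filter; a minimal sketch:

// keeps the elements satisfying the predicate and leaves the original set
// untouched, so there is no iterating-while-modifying question at all
val s = Set(1, 2, 3, 4, 5, 6)
val retained = s.filter(Set(2, 4, 6)) // Set(2, 4, 6)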

What are advantages of a Twitter Future over a Scala Future?

I know a lot of reasons for Scala Future to be better. Are there any reasons to use Twitter Future instead? Except the fact Finagle uses it.
Disclaimer: I worked at Twitter on the Future implementation. A little bit of context: we started our own implementation before Scala had a "good" implementation of Future.
Here are the features of Twitter's Future:
Some method names are different, and Twitter's Future has some new helper methods in the companion object. Just one example: Future.join(f1, f2) can work on heterogeneous Future types.
Future.join(
  Future.value(new Object), Future.value(1)
).map {
  case (o: Object, i: Int) => println(o, i)
}
o and i keep their types; they're not cast to the least common supertype Any.
A chain of onSuccess is guaranteed to be executed in order:
e.g.:
f.onSuccess { _ =>
  println(1) // #1
} onSuccess { _ =>
  println(2) // #2
}
#1 is guaranteed to be executed before #2
The threading model is a little bit different. There's no notion of ExecutionContext; the thread that sets the value in a Promise (the mutable implementation of a Future) is the one executing all the computations in the future graph.
e.g.:
val f1 = new Promise[Int]
f1.map(_ * 2).map(_ + 1)
f1.setValue(2) // <- this thread also executes *2 and +1
There's a notion of interruption/cancellation. With Scala's Futures, information only flows in one direction; with Twitter's Future, you can notify a producer of some information (not necessarily a cancellation). In practice, this is used in Finagle to propagate the cancellation of an RPC. Because Finagle also propagates the cancellation across the network, and because Twitter has a huge fan-out of requests, this actually saves lots of work.
class MyMessage extends Exception

val p = new Promise[Int]
p.setInterruptHandler {
  case ex: MyMessage => println("Receive MyMessage")
}

val f = p.map(_ + 1).map(_ * 2)
f.raise(new MyMessage) // prints "Receive MyMessage"
Until recently, Twitter's Future was the only one to implement efficient tail recursion (i.e. you can have a recursive function that calls itself without blowing up your call stack). It has since been implemented in Scala 2.11+ (I believe).
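As a rough sketch of what that enables (assuming com.twitter.util.Future is on the classpath; pollOnce is a made-up callee):

import com.twitter.util.Future

// a recursive flatMap "loop"; the tail-recursion support mentioned above is
// what keeps this from blowing the stack or leaking the promise chain,
// however many iterations it takes
def pollUntilDone(pollOnce: () => Future[Boolean]): Future[Unit] =
  pollOnce().flatMap { done =>
    if (done) Future.Done
    else pollUntilDone(pollOnce)
  }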
As far as I can tell, the main difference that could go in favor of using Twitter's Future is that it can be cancelled, unlike Scala's Future.
Also, there used to be some support for tracing the call chains (as you probably know plain stack traces are close to being useless when using Futures). In other words, you could take a Future and tell what chain of map/flatMap produced it. But the idea has been abandoned if I understand correctly.

scala 2.8 control Exception - what is the point?

In the upcoming Scala 2.8, a util.control package has been added which includes a Breaks library and a construct for handling exceptions, so that code which looks like:
type NFE = NumberFormatException
val arg = "1"
val maybeInt = try { Some(arg.toInt) } catch { case e: NFE => None }
Can be replaced with code like:
import util.control.Exception._
val maybeInt = catching(classOf[NFE]) opt arg.toInt
My question is why? What does this add to the language other than providing another (and radically different) way to express the same thing? Is there anything which can be expressed using the new control but not via try-catch? Is it a DSL supposed to make exception-handling in Scala look like some other language (and if so, which one)?
There are two ways to think about exceptions. One way is to think of them as flow control: an exception changes the flow of execution of the program, making the execution jump from one place to another. A second way is to think of them as data: an exception is an information about the execution of the program, which can then be used as input to other parts of the program.
The try/catch paradigm used in C++ and Java is very much of the first kind (*).
If, however, you prefer to deal with exceptions as data, then you'll have to resort to code such as the one shown. For the simple case, that's rather easy. However, when it comes to the functional style where composition is king, things start to get complicated. You either have to duplicate code all around, or you roll your own library to deal with it.
Therefore, in a language which purports to support both functional and OO style, one shouldn't be surprised to see library support for treating exceptions as data.
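For instance, the library lets you reify the outcome as a value; a small sketch (parsePort is an invented helper):

import scala.util.control.Exception._

// the exception becomes data (a Left) instead of a jump in control flow
def parsePort(s: String): Either[Throwable, Int] =
  catching(classOf[NumberFormatException]) either s.toInt

parsePort("8080") // Right(8080)
parsePort("oops") // Left(java.lang.NumberFormatException: ...)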
And note that there are oh-so-many other possibilities provided by Exception to handle things. You can, for instance, chain catch handlers, much in the way that Lift chains partial functions to make it easy to delegate responsibility over the handling of web page requests.
Here is one example of what can be done, since automatic resource management is in vogue these days:
def arm[T <: java.io.Closeable, R](resource: T)(body: T => R)(handlers: Catch[R]): R = (
  handlers
    andFinally (ignoring(classOf[Any]) { resource.close() })
    apply body(resource)
)
Which gives you a safe closing of the resource (note the use of ignoring), and still applies any catching logic you may want to use.
(*) Curiously, Forth's exception control, catch&throw, is a mix of them. The flow jumps from throw to catch, but then that information is treated as data.
EDIT
Ok, ok, I yield. I'll give an example. ONE example, and that's it! I hope this isn't too contrived, but there's no way around it. This kind of thing would be most useful in large frameworks, not in small samples.
At any rate, let's first define something to do with the resource. I decided on printing lines and returning the number of lines printed, and here is the code:
def linePrinter(lnr: java.io.LineNumberReader) = arm(lnr) { lnr =>
  var lineNumber = 0
  var lineText = lnr.readLine()
  while (null != lineText) {
    lineNumber += 1
    println("%4d: %s" format (lineNumber, lineText))
    lineText = lnr.readLine()
  }
  lineNumber
} _
Here is the type of this function:
linePrinter: (lnr: java.io.LineNumberReader)(util.control.Exception.Catch[Int]) => Int
So, arm received a generic Closeable, but I need a LineNumberReader, so when I call this function I need to pass that. What I return, however, is a function Catch[Int] => Int, which means I need to pass two parameters to linePrinter to get it to work. Let's come up with a Reader, now:
val functionText = """def linePrinter(lnr: java.io.LineNumberReader) = arm(lnr) { lnr =>
var lineNumber = 1
var lineText = lnr.readLine()
while (null != lineText) {
println("%4d: %s" format (lineNumber, lineText))
lineNumber += 1
lineText = lnr.readLine()
}
lineNumber
} _"""
val reader = new java.io.LineNumberReader(new java.io.StringReader(functionText))
So, now, let's use it. First, a simple example:
scala> linePrinter(new java.io.LineNumberReader(reader))(noCatch)
1: def linePrinter(lnr: java.io.LineNumberReader) = arm(lnr) { lnr =>
2: var lineNumber = 1
3: var lineText = lnr.readLine()
4: while (null != lineText) {
5: println("%4d: %s" format (lineNumber, lineText))
6: lineNumber += 1
7: lineText = lnr.readLine()
8: }
9: lineNumber
10: } _
res6: Int = 10
And if I try it again, I get this:
scala> linePrinter(new java.io.LineNumberReader(reader))(noCatch)
java.io.IOException: Stream closed
Now suppose I want to return 0 if any exception happens. I can do it like this:
linePrinter(new java.io.LineNumberReader(reader))(allCatch withApply (_ => 0))
What's interesting here is that I completely decoupled the exception handling (the catch part of try/catch) from the closing of the resource, which is done through finally. Also, the error handling is a value I can pass on to the function. At the very least, it makes mocking of try/catch/finally statements much easier. :-)
Also, I can combine multiple Catch using the or method, so that different layers of my code might choose to add different handlers for different exceptions. Which really is my main point, but I couldn't find an exception-rich interface (in the brief time I looked :).
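To make that concrete, here is a small sketch of composing independently defined handlers with or (the value names are made up):

import scala.util.control.Exception._

// two handlers defined once, composed, and reused; different layers could
// contribute their own Catch values the same way
val numberProblems = catching(classOf[NumberFormatException])
val nullProblems = catching(classOf[NullPointerException])
val parseProblems = numberProblems or nullProblems

def parse(s: String): Option[Int] = parseProblems opt s.trim.toInt

parse(" 42 ") // Some(42)
parse("foo")  // None
parse(null)   // None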
I'll finish with a remark about the definition of arm I gave. It is not a good one. Particularly, I can't use Catch methods such as toEither or toOption to change the result from R to something else, which seriously decreases the value of using Catch in it. I'm not sure how to go about changing that, though.
As the guy who wrote it, the reasons are composition and encapsulation. The Scala compiler (and, I am betting, most decent-sized source bases) is littered with places which swallow all exceptions -- you can see these with -Ywarn-catches -- because the programmer is too lazy to enumerate the relevant ones, which is understandable because it's annoying. By making it possible to define, re-use, and compose catch and finally blocks independently of the try logic, I hoped to lower the barrier to writing sensible blocks.
And, it's not exactly finished, and I'm working on a million other areas too. If you look through the actors code you can see examples of huge cut-and-pasted multiply-nested try/catch/finally blocks. I was/am unwilling to settle for try { catch { try { catch { try { catch ...
My eventual plan is to have catch take an actual PartialFunction rather than require a literal list of case statements, which presents a whole new level of opportunity, i.e. try foo() catch getPF()
I'd say it's foremost a matter of style of expression.
Beyond being shorter than its equivalent, the new catching() method offers a more functional way to express the same behavior. Try ... catch statements are generally considered imperative style and exceptions are considered side-effects. Catching() puts a blanket on this imperative code to hide it from view.
More importantly, now that we have a function, it can be more easily composed with other things; it can be passed to other higher-order functions to create more sophisticated behavior. (You cannot pass a try .. catch statement with a parametrized exception type directly.)
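A quick sketch of that point (safeToInt is an invented name):

import scala.util.control.Exception._

// the handling logic is an ordinary value, so it can be stored, mapped over,
// or passed to higher-order functions, unlike a try/catch statement
val safeToInt: String => Option[Int] =
  s => catching(classOf[NumberFormatException]).opt(s.toInt)

List("1", "two", "3").map(safeToInt) // List(Some(1), None, Some(3))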
Another way to look at this is to imagine that Scala didn't offer this catching() function. Then most likely people would 're-invent' it independently, which would cause code duplication and lead to more non-standard code. So I think the Scala designers believe this function is common enough to warrant including it in the standard Scala library. (And I agree.)
alex