I'm trying to emulate a simple functional language using an actor-based execution model, and ran into issues when modelling if-expressions.
Actor systems nowadays are mostly used to speed all kinds of things up by avoiding OS locks and stalled threads, or to make microservices less painful, but originally the actor model was meant as an alternative model of computation in general [1][2]; a contemporary take on that idea may be propagation networks. So it should be capable of covering any programming language construct, certainly an if, right?
While I'm aware that this question is occasionally met with irritation, I saw one timid attempt to represent recursive algorithms using Akka actors (I've refurbished it and added further examples, including the one given below). That attempt halted at function calls, but why not go further and also model operators and if conditions with actors? In fact the Smalltalk language applies this model and is a precursor of the actor concept, as has been pointed out in the accepted answer below.
Surprisingly, recursive function calls aren't much of an issue, but if1 is, due to its potentially stateful nature.
Given the clause C: if a then x else y, here's the problem:
My initial idea was that C is an actor behaving like a function with three parameters (a, x, y) that returns either x or y depending on a. Being maximally parallel [2], a, x and y would be evaluated simultaneously and passed as messages to C. Now, this isn't really good if C is the exit condition of a recursive function f: it sends one branch of f into an infinite recursion. Also, if x or y have side effects, one can't just evaluate both of them. Let's take this recursive sum (which is not the usual factorial; stupid as such, and it could be made tail recursive, but that's not the point):
f(n) = if n <= 0
         0
       else
         n + f(n - 1)
Note that I'd like to create an if-expression resembling the one of Scala (see the spec, p. 88), or Haskell, for that matter, rather than an if-statement that relies on side effects.
f(0) would cause three concurrent evaluations:
n <= 0 (ok)
0 (ok)
n + f(n - 1) (bad: this introduces the weird behavior that the call f(0) actually will return, yielding 0, but the evaluation of its branches continues infinitely)
I can see these options from here:
The whole computation becomes stateful, and the evaluation of either x or y only happens after a has been calculated (mandatory if x or y have side effects); see the sketch after this list.
Some guarding mechanism gets introduced that renders x or y not applicable for arguments outside a certain range upon the call of f. They might evaluate to some "not applicable" marker instead of a value, which will not be used in C anyway, since it comes from the branch that isn't relevant.
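A minimal sketch of the first option in Scala, assuming Akka classic actors (Eval and IfActor are made-up names for illustration): the condition is evaluated first, and the branches travel as thunks, so nothing is forced before the condition is known.

import akka.actor.Actor

// Hypothetical protocol: the already-evaluated condition plus both
// branches as unevaluated thunks, so only the chosen branch ever runs.
final case class Eval(cond: Boolean, thn: () => Int, els: () => Int)

class IfActor extends Actor {
  def receive = {
    case Eval(cond, thn, els) =>
      // Force exactly one branch, and only after the condition is known.
      sender() ! (if (cond) thn() else els())
  }
}

Here the recursive call n + f(n - 1) would live inside the else-thunk, so evaluating f(0) never touches it.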
I'm not sure at this point if I've missed the question at a fundamental level and there are obvious other approaches that I just don't see. Input appreciated :)
Btw. see this for an exhaustive list of conditional branching in different languages (without giving their semantics); the (not exhaustive) wiki page on conditionals, with semantics; and this for a discussion of how the question at hand is an issue all the way down to the hardware level.
1 I'm aware that an if could be seen as a special case of pattern matching, but then the question is how to model the different cases of a match expression using actors. Then again, maybe that wasn't even intended in the first place: matching is just something that every actor can do without referring to other specialized "match actors". On the other hand it has been stated that "everything is an actor", which is rather confusing [2]. Btw. does anybody have a clear notion of what the [#message whatever] notation is meant to be in that paper? # is irritatingly undefined. Maybe Smalltalk gives a hint; there it indicates a symbol.
There is a little bit of a misconception in your question. In functional languages, if is not necessarily a function of three parameters. Rather, it is sometimes two functions of two parameters.
In particular, that is how the Church Encoding of Booleans works in λ-calculus: there are two functions, let's call them True and False. Both functions have two parameters. True simply returns the first argument, False simply returns the second argument.
First, let's define two functions called true and false. We could define them any way we want, they are completely arbitrary, but we will define them in a very special way which has some advantages as we will see later (I will use ECMAScript as a somewhat reasonable approximation of λ-calculus that is probably readable by a bigger portion of visitors to this site than λ-calculus itself):
const tru = (thn, _ ) => thn,
fls = (_ , els) => els;
tru is a function with two parameters which simply ignores its second argument and returns the first. fls is also a function with two parameters which simply ignores its first argument and returns the second.
Why did we encode tru and fls this way? Well, this way, the two functions not only represent the two concepts of true and false, no, at the same time, they also represent the concept of "choice", in other words, they are also an if/then/else expression! We evaluate the if condition and pass it the then block and the else block as arguments. If the condition evaluates to tru, it will return the then block, if it evaluates to fls, it will return the else block. Here's an example:
tru(23, 42);
// => 23
This returns 23, and this:
fls(23, 42);
// => 42
returns 42, just as you would expect.
There is a wrinkle, however:
tru(console.log("then branch"), console.log("else branch"));
// then branch
// else branch
This prints both then branch and else branch! Why?
Well, it returns the return value of the first argument, but it evaluates both arguments, since ECMAScript is strict and always evaluates all arguments to a function before calling the function. IOW: it evaluates the first argument which is console.log("then branch"), which simply returns undefined and has the side-effect of printing then branch to the console, and it evaluates the second argument, which also returns undefined and prints to the console as a side-effect. Then, it returns the first undefined.
In λ-calculus, where this encoding was invented, that's not a problem: λ-calculus is pure, which means it doesn't have any side-effects; therefore you would never notice that the second argument also gets evaluated. Plus, λ-calculus is lazy (or at least, it is often evaluated under normal order), meaning, it doesn't actually evaluate arguments which aren't needed. So, IOW: in λ-calculus the second argument would never be evaluated, and if it were, we wouldn't notice.
ECMAScript, however, is strict, i.e. it always evaluates all arguments. Well, actually, not always: the if/then/else, for example, only evaluates the then branch if the condition is true and only evaluates the else branch if the condition is false. And we want to replicate this behavior with our iff. Thankfully, even though ECMAScript isn't lazy, it has a way to delay the evaluation of a piece of code, the same way almost every other language does: wrap it in a function, and if you never call that function, the code will never get executed.
So, we wrap both blocks in a function, and at the end call the function that is returned:
tru(() => console.log("then branch"), () => console.log("else branch"))();
// then branch
prints then branch and
fls(() => console.log("then branch"), () => console.log("else branch"))();
// else branch
prints else branch.
We could implement the traditional if/then/else this way:
const iff = (cnd, thn, els) => cnd(thn, els);
iff(tru, 23, 42);
// => 23
iff(fls, 23, 42);
// => 42
Again, we need some extra function wrapping when calling the iff function and the extra function call parentheses in the definition of iff, for the same reason as above:
const iff = (cnd, thn, els) => cnd(thn, els)();
iff(tru, () => console.log("then branch"), () => console.log("else branch"));
// then branch
iff(fls, () => console.log("then branch"), () => console.log("else branch"));
// else branch
Now that we have those two definitions, we can implement or. First, we look at the truth table for or: if the first operand is truthy, then the result of the expression is the same as the first operand. Otherwise, the result of the expression is the result of the second operand. In short: if the first operand is true, we return the first operand, otherwise we return the second operand:
const orr = (a, b) => iff(a, () => a, () => b);
Let's check that it works:
orr(tru,tru);
// => tru(thn, _) {}
orr(tru,fls);
// => tru(thn, _) {}
orr(fls,tru);
// => tru(thn, _) {}
orr(fls,fls);
// => fls(_, els) {}
Great! However, that definition looks a little ugly. Remember, tru and fls already act like a conditional all by themselves, so really there is no need for iff, and thus all of that function wrapping at all:
const orr = (a, b) => a(a, b);
There you have it: or (plus other boolean operators) defined with nothing but function definitions and function calls in just a handful of lines:
const tru = (thn, _ ) => thn,
fls = (_ , els) => els,
orr = (a , b ) => a(a, b),
nnd = (a , b ) => a(b, a),
ntt = a => a(fls, tru),
xor = (a , b ) => a(ntt(b), b),
iff = (cnd, thn, els) => cnd(thn, els)();
Unfortunately, this implementation is rather useless: there are no functions or operators in ECMAScript which return tru or fls, they all return true or false, so we can't use them with our functions. But there's still a lot we can do. For example, this is an implementation of a singly-linked list:
const cons = (hd, tl) => which => which(hd, tl),
car = l => l(tru),
cdr = l => l(fls);
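For comparison, here is a rough sketch of the same encoding in Scala, with the types loosened to Any since we are mimicking the untyped λ-calculus:

// Church pairs in Scala: a pair is a function that hands its two
// components to a selector, and tru/fls are the selectors.
type Pair = ((Any, Any) => Any) => Any

val tru: (Any, Any) => Any = (thn, _) => thn
val fls: (Any, Any) => Any = (_, els) => els

def cons(hd: Any, tl: Any): Pair = which => which(hd, tl)
def car(p: Pair): Any = p(tru)
def cdr(p: Pair): Any = p(fls)

car(cons(23, 42))  // => 23
cdr(cons(23, 42))  // => 42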
You may have noticed something peculiar: tru and fls play a double role, they act both as the data values true and false, but at the same time, they also act as a conditional expression. They are data and behavior, bundled up into one … uhm … "thing" … or (dare I say) object! Does this idea of identifying data and behavior remind us of anything?
Indeed, tru and fls are objects. And, if you have ever used Smalltalk, Self, Newspeak or other pure object-oriented languages, you will have noticed that they implement booleans in exactly the same way: two objects true and false which have a method named if that takes two blocks (functions, lambdas, whatever) as arguments and evaluates one of them.
Here's an example of what it might look like in Scala:
sealed abstract trait Buul {
  def apply[T, U <: T, V <: T](thn: ⇒ U)(els: ⇒ V): T
  def &&&(other: ⇒ Buul): Buul
  def |||(other: ⇒ Buul): Buul
  def ntt: Buul
}

case object Tru extends Buul {
  override def apply[T, U <: T, V <: T](thn: ⇒ U)(els: ⇒ V): U = thn
  override def &&&(other: ⇒ Buul) = other
  override def |||(other: ⇒ Buul): this.type = this
  override def ntt = Fls
}

case object Fls extends Buul {
  override def apply[T, U <: T, V <: T](thn: ⇒ U)(els: ⇒ V): V = els
  override def &&&(other: ⇒ Buul): this.type = this
  override def |||(other: ⇒ Buul) = other
  override def ntt = Tru
}

object BuulExtension {
  import scala.language.implicitConversions
  implicit def boolean2Buul(b: ⇒ Boolean) = if (b) Tru else Fls
}
import BuulExtension._
(2 < 3) { println("2 is less than 3") } { println("2 is greater than 3") }
// 2 is less than 3
Given the very close relationship between OO and actors (they are pretty much the same thing, actually), which is not historically surprising (Alan Kay based Smalltalk on Carl Hewitt's PLANNER; Carl Hewitt based Actors on Alan Kay's Smalltalk), I wouldn't be surprised if this turned out to be a step in the right direction to solve your problem.
Q: didn't (I) miss out on the question at a fundamental level?
Yes, you happened to have missed a cardinal point: even the functional languages, which may otherwise enjoy fine-grained forms of AND- and/or OR-based parallelism, do not grow so wild as not to respect the strictly [SERIAL] nature of if ( expression1 ) expression2 [ else expression3 ].
You have spent many efforts on argumentation about the recursion case(s), whereas the principal property was left out of your view. Statefulness is the mother nature of computing (these toys are nothing but finite state automata; no matter how large the state space might be, it is and always will remain finite).
Even the cited Scala spec (p. 88) confirms this: "The conditional expression is evaluated by evaluating first e1. If this evaluates to true, the result of evaluating e2 is returned, otherwise the result of evaluating e3 is returned." - which is a pure-[SERIAL] process recipe (one step after another).
One may remember that even the evaluation of expression1 may have (and does have) state-change effects, not merely "side-effects": a PRNG steps into a new state whenever a random number is asked to be generated, and many similar situations.
Thus if e1 then e2 else e3 has to obey a pure-[SERIAL] implementation, no matter what benefits may be brought into action from fine-grained {AND|OR}-based parallelism (one may see many working examples thereof in languages that have been able to use them since the late '70s and early '80s).
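For illustration, here is that serial recipe written out as a user-defined if in Scala; serialIf is a made-up name, and the by-name (=> T) parameters ensure that only the selected branch is ever evaluated:

// The condition is evaluated first; both branches arrive unevaluated
// (by-name), and exactly one of them is forced.
def serialIf[T](cond: Boolean)(thn: => T)(els: => T): T =
  if (cond) thn else els

// The recursive sum from the question now terminates:
def f(n: Int): Int = serialIf(n <= 0)(0)(n + f(n - 1))

f(5)  // => 15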
Related
I was recently reading Category Theory for Programmers, and in one of the challenges Bartosz proposed to write a function called memoize which takes a function as an argument and returns the same one, with the difference that, the first time this new function is called, it stores the result of the argument and then returns this result each time it is called again.
def memoize[A, B](f: A => B): A => B = ???
The problem is, I can't think of any way to implement this function without resorting to mutability. Moreover, the implementations I have seen use mutable data structures to accomplish the task.
My question is, is there a purely functional way of accomplishing this? Maybe without mutability or by using some functional trick?
Thanks for reading my question and for any future help. Have a nice day!
is there a purely functional way of accomplishing this?
No. Not in the narrowest sense of pure functions and using the given signature.
TLDR: Use mutable collections, it's okay!
Impurity of g
val g = memoize(f)
// state 1
g(a)
// state 2
What would you expect to happen for the call g(a)?
If g(a) memoizes the result, an (internal) state has to change, so the state is different after the call g(a) than before.
As this could be observed from the outside, the call to g has side effects, which makes your program impure.
From the Book you referenced, 2.5 Pure and Dirty Functions:
[...] functions that always produce the same result given the same input and have no side effects are called pure functions.
Is this really a side effect?
Normally, at least in Scala, internal state changes are not considered side effects.
See the definition in the Scala Book
A pure function is a function that depends only on its declared inputs and its internal algorithm to produce its output. It does not read any other values from “the outside world” — the world outside of the function’s scope — and it does not modify any values in the outside world.
The following examples of lazy computations both change their internal states, but are normally still considered purely functional as they always yield the same result and have no side effects apart from internal state:
lazy val x = 1
// state 1: x is not computed
x
// state 2: x is 1
val ll = LazyList.continually(0)
// state 1: ll = LazyList(<not computed>)
ll(0)
// state 2: ll = LazyList(0, <not computed>)
In your case, the equivalent would be something using a private, mutable Map (as the implementations you may have found) like:
import scala.collection.mutable

def memoize[A, B](f: A => B): A => B = {
  val cache = mutable.Map.empty[A, B]
  (a: A) => cache.getOrElseUpdate(a, f(a))
}
Note that the cache is not public.
So, for a pure function f and without looking at memory consumption, timings, reflection or other evil stuff, you won't be able to tell from the outside whether f was called twice or g cached the result of f.
In this sense, side effects are only things like printing output, writing to public variables, files etc.
Thus, this implementation is considered pure (at least in Scala).
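For instance (slowSquare is just a made-up stand-in for an expensive pure function):

// A pure but expensive function, memoized.
def slowSquare(n: Int): Int = { Thread.sleep(1000); n * n }

val g = memoize(slowSquare)
g(7)  // slow: computes 49 and stores it in the cache
g(7)  // fast: served from the cache; from the outside, only the timing differs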
Avoiding mutable collections
If you really want to avoid var and mutable collections, you need to change the signature of your memoize method.
This is because, if g cannot change its internal state, it won't be able to memoize anything new after it has been initialized.
An (inefficient but simple) example would be
def memoizeOneValue[A, B](f: A => B)(a: A): (B, A => B) = {
  val b = f(a)
  val g = (v: A) => if (v == a) b else f(v)
  (b, g)
}
val (b1, g) = memoizeOneValue(f)(a1)
val (b2, h) = memoizeOneValue(g)(a2)
// ...
The result of f(a1) would be cached in g, but nothing else. Then, you could chain this and always get a new function.
If you are interested in a faster version of that, see @esse's answer, which does the same, but more efficiently (using an immutable map, so O(log(n)) instead of the linked list of functions above, O(n)).
Let's try (note: I have changed the return type of memoize to store the cached data):
import scala.language.existentials
type M[A, B] = A => T forSome { type T <: (B, A => T) }
def memoize[A, B](f: A => B): M[A, B] = {
  import scala.collection.immutable
  def withCache(cache: immutable.Map[A, B]): M[A, B] = a => cache.get(a) match {
    case Some(b) => (b, withCache(cache))
    case None =>
      val b = f(a)
      (b, withCache(cache + (a -> b)))
  }
  withCache(immutable.Map.empty)
}
def f(i: Int): Int = { print(s"Invoke f($i)"); i }
val (i0, m0) = memoize(f)(1) // f is only invoked the first time
val (i1, m1) = m0(1)
val (i2, m2) = m1(1)
Yes, there are purely functional ways to implement polymorphic function memoization. The topic is surprisingly deep and even summons the Yoneda Lemma, which is likely what Bartosz had in mind with this exercise.
The blog post Memoization in Haskell gives a nice introduction by simplifying the problem a bit: instead of looking at arbitrary functions it restricts the problem to functions from the integers.
The following memoize function takes a function of type Int -> a and returns a memoized version of the same function. The trick is to turn a function into a value because, in Haskell, functions are not memoized but values are. memoize converts a function f :: Int -> a into an infinite list [a] whose nth element contains the value of f n. Thus each element of the list is evaluated when it is first accessed and cached automatically by the Haskell runtime thanks to lazy evaluation.
memoize :: (Int -> a) -> (Int -> a)
memoize f = (map f [0 ..] !!)
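For comparison, a rough sketch of the same trick in Scala, using a LazyList whose elements are computed on first access and then cached by the runtime (memoizeNat is a made-up name):

// Memoize a function on the naturals by backing it with a LazyList.
def memoizeNat[A](f: Int => A): Int => A = {
  val cache = LazyList.from(0).map(f)  // element n caches f(n) once forced
  n => cache(n)
}

val g = memoizeNat { n => println(s"computing $n"); n * n }
g(3)  // prints "computing 3", returns 9
g(3)  // silent: returns the cached 9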
Apparently the approach can be generalised to functions of arbitrary domains. The trick is to come up with a way to use the type of the domain as an index into a lazy data structure used for "storing" previous values. And this is where the Yoneda Lemma comes in, and where my own understanding of the topic becomes flimsy.
For a datatype representing the natural numbers:
sealed trait Nat
case object Z extends Nat
case class S(pred: Nat) extends Nat
In Scala, here is an elementary way of implementing the corresponding catamorphism:
def cata[A](z: A)(l: Nat)(f: A => A): A = l match {
  case Z => z
  case S(xs) => f( cata(z)(xs)(f) )
}
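For instance, counting the S constructors converts a Nat back to an Int:

val three: Nat = S(S(S(Z)))
cata(0)(three)(_ + 1)  // => 3: one application of _ + 1 per S constructor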
However, since the recursive call to cata isn't in tail position, this can easily trigger a stack overflow.
What are alternative implementation options that will avoid this? I'd rather not go down the route of F-algebras unless the interface ultimately presented by the code can look pretty much as simple as the above.
EDIT: Looks like this might be directly relevant: Is it possible to use continuations to make foldRight tail recursive?
If you were implementing a catamorphism on lists, that would be what in Haskell we call a foldr. We know that foldr does not have a tail-recursive definition, but foldl does. So if you insist on a tail-recursive program, the right thing to do is reverse the list argument (tail-recursively, in linear time), then use a foldl in place of the foldr.
Your example uses the simpler data type of naturals (and a truly "efficient" implementation would use machine integers, but we'll agree to leave that aside). What is the reverse of one of your natural numbers? Just the number itself, because we can think of it as a list with no data in each node, so we can't tell the difference when it is reversed! And what's the equivalent of the foldl? It's the program (forgive the pseudocode)
def cata(z, a, f) = {
  var x = a, y = z;
  while (x != Z) {
    y = f(y);
    x = pred(x)
  }
  return y
}
Or as a Scala tail-recursion,
@annotation.tailrec // the compiler verifies the call is in tail position
def cata[A](z: A)(a: Nat)(f: A => A): A = a match {
  case Z => z
  case S(b) => cata( f(z) )(b)(f)
}
Will that do?
Yes, this is exactly the motivating example in the paper Clowns to the left of me, jokers to the right (Dissecting Data Structures) (an updated, better, but non-free version is here: http://dl.acm.org/citation.cfm?id=1328474).
The basic idea is that you want to turn your recursive function into a loop, so you need to figure out a data structure that keeps track of the state of the procedure, which is
What you've calculated so far
What you have left to do.
The type of this state depends on the structure of the type you're doing the fold over, at any point in the fold you are at some node of the tree and you need to remember the tree structure of "the rest of the tree".
The paper shows how you can calculate that state type mechanically. If you do this for Lists, you get that the state you need to keep track of is
The operation run on all the previous values.
The list of elements left to process.
Which is exactly what foldl keeps track of, so it's kind of a coincidence that foldl and foldr can be given the same type.
I am trying to understand the following code but can't.
It is supposed to create a child actor for an Event if one does not exist; otherwise, it signals that the Event exists, as it already has an associated child actor.
context.child(name).fold(create())(_ => sender() ! EventExists)
But the fold here does not make sense to me. If context.child is empty we get the creation, and I understand that. However, if there is a child, aren't we still going to call create()? Why?
Akka's context.child returns an Option.
As you can see from Option's scaladoc:
fold[B](ifEmpty: ⇒ B)(f: (A) ⇒ B): B — Returns the result of applying f to this scala.Option's value if the scala.Option is nonempty. Otherwise, evaluates expression ifEmpty.
Or making it more clear:
This (fold) is equivalent to scala.Option map f getOrElse ifEmpty.
So the first parameter of fold is lazy (call-by-name) and is evaluated only if the Option is empty. The second parameter (a function) is called only if the Option is not empty.
Experiment:
scala> Some(0).fold({println("1");1}){_ => println("2"); 2}
2
res0: Int = 2
scala> None.fold({println("1");1}){_ => println("2"); 2}
1
res1: Int = 1
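Mapped back to the line from the question (name, create and EventExists are from the asker's code), the fold is therefore equivalent to this explicit match:

context.child(name) match {
  case None    => create()                // no child yet: create one
  case Some(_) => sender() ! EventExists  // child already exists: just reply
}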
Here's some further reading on the topic:
https://kwangyulseo.com/2014/05/21/scala-option-fold-vs-option-mapgetorelse/
And some criticism of that approach:
http://www.nurkiewicz.com/2014/06/optionfold-considered-unreadable.html
But in Option.fold() the contract is different: folding function takes just one parameter rather than two. If you read my previous article about folds you know that reducing function always takes two parameters: current element and accumulated value (initial value during first iteration). But Option.fold() takes just one parameter: current Option value! This breaks the consistency, especially when realizing Option.foldLeft() and Option.foldRight() have correct contract (but it doesn't mean they are more readable).
In this blog post about parallel collections in Scala (http://beust.com/weblog/2011/08/15/scalas-parallel-collections/), this is mentioned in a comment by Daniel Spiewak:
Other people have already commented about the mkString example, so I'm going to leave it alone. It does reflect a much larger point about collection semantics in general. Basically, this is it: in the absence of side-effects (in user code), parallel collections have the exact same semantics as the sequential collections. Put another way:

forAll { (xs: Vector[A], f: A => B) =>
  (xs.par map f) == (xs map f)
}
Does this mean that if there are no side effects, parallelism is not achieved? If this is true, can this point be expanded on to explain why this is the case?
Does this mean that if there are no side effects, parallelism is not achieved?
No, that's not what it means. When Daniel Spiewak says that
Basically, this is it: in the absence of side-effects (in user code), parallel collections have the exact same semantics as the sequential collections.
It means that if your function has no side-effects then using it to map over a simple collection or a parallel collection will yield the same outcome. Which is why:
Put another way: forAll { (xs: Vector[A], f: A => B) => (xs.par map f) == (xs map f) }
if f is side-effect free.
So, it's actually the opposite: if there are side effects parallelism is not a good idea since the outcome will be inconsistent.
It will always be run in parallel, but the result may differ when side-effects are there.
Let's say A = Int and B = Int, with the following code:
var tmp = false
def f(i: Int) = if(!tmp) {tmp = true; 0} else i + 1
So here we have a function f with a side-effect. I assume that tmp is false before running the code.
Running Vector(1,2,3).map(f) will always result in Vector(0,3,4)
But Vector(1,2,3).par.map(f) can have different results. It can be Vector(0,3,4), but since it's parallel, the second element may be mapped first, etc. So something like Vector(2,0,4) might happen.
You can be sure that the result will always be the same when f has no side-effects.
Apart from:
case class A
... where case is quite useful.
Why do we need to use case in match? Wouldn't:
x match {
  y if y > 0 => y * 2
  _ => -1
}
... be much prettier and more concise?
Or why do we need to use case when a function takes a tuple? Say, we have:
val z = List((1, -1), (2, -2), (3, -3)).zipWithIndex
Now, isn't:
z map { case ((a, b), i) => a + b + i }
... way uglier than just:
z map (((a, b), i) => a + b + i)
...?
First, as we know, it is possible to put several statements in the same case scenario without needing any separation notation, just a line break, like:
x match {
  case y if y > 0 => y * 2
    println("test")
    println("test2") // these 3 statements belong to the same "case"
}
If case were not needed, the compiler would have to find a way to know whether a line belongs to the current case scenario or to the next one.
For example:
x match {
  y if y > 0 => y * 2
  _ => -1
}
How would the compiler know whether _ => -1 belongs to the first case scenario or represents the next case?
Moreover, how would the compiler know that the => sign doesn't represent a function literal but the actual code of the current case?
The compiler would certainly need some kind of syntax allowing cases to be isolated (using curly braces, or anything else):
x match {
  {y if y > 0 => y * 2}
  {_ => -1} // confusing with literal function notation
}
And surely the solution currently provided by Scala, using the case keyword, is a lot more readable and understandable than some separation device like the curly braces in my example.
Adding to @Mik378's answer:
When you write this: (a, b) => something, you are defining an anonymous Function2 - a function that takes two parameters.
When you write this: case (a, b) => something, you are defining an anonymous PartialFunction that takes one parameter and matches it against a pair.
So you need the case keyword to differentiate between these two.
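A short illustration of the difference:

// Two parameters vs. one tuple parameter destructured by a pattern.
val f2: (Int, Int) => Int = (a, b) => a + b            // Function2
val pf: ((Int, Int)) => Int = { case (a, b) => a + b } // matches a pair

f2(1, 2)                      // => 3
pf((1, 2))                    // => 3
List((1, 2), (3, 4)).map(pf)  // => List(3, 7)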
The second issue, anonymous functions that avoid the case, is a matter of debate:
https://groups.google.com/d/msg/scala-debate/Q0CTZNOekWk/z1eg3dTkCXoJ
Also: http://www.scala-lang.org/old/node/1260
For the first issue, the choice is whether you allow a block or an expression on the RHS of the arrow.
In practice, I find that shorter case bodies are usually preferable, so I can certainly imagine your alternative syntax resulting in crisper code.
Consider one-line methods. You write:
def f(x: Int) = 2 * x
then you need to add a statement. I don't know if the IDE is able to auto-add braces:
def f(x: Int) = { val res = 2*x ; res }
That seems no worse than requiring the same syntax for case bodies.
To review, a case clause is case Pattern Guard => body.
Currently, body is a block, or a sequence of statements and a result expression.
If body were an expression, you'd need braces for multiple statements, like a function.
I don't think => results in ambiguities since function literals don't qualify as patterns, unlike literals like 1 or "foo".
One snag might be: { case foo => ??? } is a "pattern matching anonymous function" (SLS 8.5). Obviously, if the case is optional or eliminated, then { foo => ??? } is ambiguous. You'd have to distinguish case clauses for anon funs (where case is required) and case clauses in a match.
One counter-argument for the current syntax is that, in an intuition deriving from C, you always secretly hope that your match will compile to a switch table. In that metaphor, the cases are labels to jump to, and a label is just the address of a sequence of statements.
The alternative syntax might encourage a more inlined approach:
x match {
  C => c(x)
  D => d(x)
  _ => ???
}
@inline def c(x: X) = ???
//etc
In this form, it looks more like a dispatch table, and the match body recalls the Map syntax, Map(a -> 1, b -> 2), that is, a tidy simplification of the association.
One of the key aspects of code readability is the words that grab your attention. For example, return grabs your attention when you see it because you know that it is such a decisive action (breaking out of the function and possibly sending a value back to the caller).
Another example is break (not that I like break, but it gets your attention).
I would agree with @Mik378 that case in Scala is more readable than the alternatives. Besides avoiding the compiler confusion he mentions, it gets your attention.
I am all for concise code, but there is a line between concise and illegible. I will gladly make the trade of 4n characters (where n is the number of cases) for the substantial readability that I get in return.