Motivation for match expression syntax - scala

The syntax for match expressions is pretty nice:
expr match {
  case Test(l1) => ...
  ...
}
But it's driving me nuts that I don't understand the motivation for why this syntax is used instead of match (expr) ..., like branching statements in a decent C descendant!
I see no reasonable explanation for this, and I can't find the answer in Programming in Scala, on the Scala web site, in this paper, this thesis, here on SO, or anywhere else on the web.
It's not that there's anything wrong with it, just that it's a complete mystery. And if match works this way, why not if and for as well?
Does anybody know? I don't think I can stand using the language any longer without finding this out. I think about it all the time. I can hardly sleep at night.

To take a similar bit of Scala syntax, guards in pattern matching cases don't require parentheses around their conditional expressions—e.g., the following:
case i if i % 2 == 0 => i / 2
Is just as valid as this:
case i if (i % 2 == 0) => i / 2
Sticking to the C-family style would mean requiring the latter form, even though the parentheses aren't necessary for disambiguation. The Scala language designers decided that reducing line noise trumped maintaining the family resemblance in this case.
I'd guess that a similar motivation is at work in the match syntax, and to my eye match (expr) { ... } does indeed look pretty awful (and misleading) compared to expr match { ... }.
Also, just this afternoon I refactored someone else's x match { ... } on an Option to x map { ... } instead. match as an infix operator makes the similarity between these two expressions clear.
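To make that concrete, here's a hypothetical before-and-after of such a refactor (maybeName is an invented example value):
val maybeName: Option[String] = Some("scala")

// Before: an explicit match on the Option
val loud1 = maybeName match {
  case Some(s) => Some(s.toUpperCase)
  case None    => None
}

// After: map says the same thing, and the two infix forms line up visually
val loud2 = maybeName map { s => s.toUpperCase }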
On the issue of why match isn't just a method, here's a five-year-old question from David Pollak on the scala-debate mailing list:
Why is 'match' a language level construct rather than a method on Any?
And Martin Odersky's answer:
It used to be that way in Scala 1. I am no longer sure why we changed.
syntax highlighting? error reporting? not sure. I don't think it
matters much either way, though.
I'm with Martin on this one.
Note that there are a couple of practical differences (apart from the simple "dot or not" question). For example, this doesn't compile:
def foo[A, B](f: PartialFunction[A, B])(a: A) = a match f
If match were still a method on Any, requiring a literal bunch of cases would be a rather strange requirement.

Related

Is Scala pattern matching harmful to object orientation?

Disclaimer: I'm much more experienced in Java than in Scala (which I'm learning).
In Java, I have read several times that switch could be harmful to object orientation, especially when used against types (this kind of problem also led to this: http://www.antiifcampaign.com/).
In Scala, one of Martin Odersky's introductory video lessons shows how pattern matching is a better alternative to multiple "low-level" isInstanceOf checks.
Although pattern matching captures more flexible patterns than a simple Java switch, I still see the former as a generalization of the latter.
Don't pattern matching and "switch on types" roughly share the same fundamental approach?
Isn't pattern matching just some syntactic sugar hiding lots of isInstanceOf/asInstanceOf?
If not, how would pattern matching be more flexible (as in: resilient to change) than writing those low level checks ourselves (apart from the error-prone nature of this tedious task)?
Pattern matching may well be "syntactic sugar", in some senses, but it is a lot more than a code replacement.
a) The match construct enforces completeness on the domain by making sure there are no uncovered cases, unless you specifically indicate that it is only a partial function (a sketch of this follows the examples below).
b) The match syntax is much less verbose.
c) The match construct includes guards, predicates that qualify the match.
def mergeSort(a: List[Int], b: List[Int]): List[Int] = (a, b) match {
  case (ah :: at, bh :: bt) if ah < bh =>
    ah :: mergeSort(at, b)
  case (ah :: at, bh :: bt) if ah >= bh =>
    bh :: mergeSort(a, bt)
  case (Nil, b) =>
    b
  case (a, Nil) =>
    a
}
d) Deconstruction:
...
case Node(x, Empty, Empty) =>
  x - 52
...
The declarative form is much easier to read (given a bit of experience).
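For (a), a minimal sketch of what the exhaustiveness checking buys you, assuming a sealed hierarchy (types invented for illustration):
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Square(side: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r) => math.Pi * r * r
  // Square is not covered: the compiler warns "match may not be exhaustive"
}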
Edit: the term "syntactic sugar" is used frequently about Scala. However, type inferencing and "aggressive" type resolution such as is used in pattern matching make for powerful language constructs. As for the original question, pattern matching is independent of object orientation and provides powerful, type safe methods for accessing data that may make the use of classes less of a requirement.
If you have n types of nouns with m types of verbs, you have n x m noun-verb combinations to keep track of.
There are two good strategies for managing this complexity.
You can organize your code by nouns (classes), and each class needs to be able to deal with every verb (method). This makes it easy to add new nouns. Of course, when you add a new verb, it's annoying to go through each of your nouns and make sure that it can handle the new verb. This is the object-oriented approach.
You can also organize your code by verbs (functions), and each function needs to be able to deal with every noun (pattern). This makes it easy to add new verbs. Of course, when you add a new noun, it's annoying to go through each of your verbs and make sure that it can handle the new noun. This is the functional approach.
There's nothing clearly better about either approach, and both have been used very successfully. You can run into problems when you try to mix them, though, so generally you want to be very clear about which approach you are using for a given problem.
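A toy sketch of the two organizations (all names invented):
// Nouns first (OO): each class owns every verb.
trait Animal {
  def speak: String
  def feed: String
}
class Dog extends Animal {
  def speak = "woof"
  def feed = "kibble"
}
// Adding a new noun is one new class; adding a new verb touches every class.

// Verbs first (functional): each function handles every noun.
sealed trait Beast
case object Cat extends Beast
case object Cow extends Beast
def speak(b: Beast): String = b match {
  case Cat => "meow"
  case Cow => "moo"
}
// Adding a new verb is one new function; adding a new noun touches every function.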
In Scala, pattern matching is done via extractors. Extractors are themselves objects and only have access to the public API of the object that is being pattern matched against.
Thus, pattern matching does not break object encapsulation.
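For illustration, a minimal custom extractor (invented example); note that unapply can only go through the public balance accessor:
class Account(private var cents: Long) {
  def balance: Double = cents / 100.0   // public API; cents stays hidden
}

object InTheRed {
  def unapply(a: Account): Option[Double] =
    if (a.balance < 0) Some(a.balance) else None
}

def describe(a: Account): String = a match {
  case InTheRed(b) => "overdrawn by " + (-b)
  case _           => "in good standing"
}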
Theoretically, all conditionals (including pattern matching) can be replaced by polymorphism. See Smalltalk as an example of a language which has no conditionals, no loops, no switches, no control structures of any kind except polymorphic message dispatch. And see Newspeak as an example of a Smalltalk-inspired language which has powerful pattern matching inspired by Scala and F# implemented as a library completely on top of polymorphic message dispatch.
When done this way, pattern matching is as object-oriented as can be.
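For a flavour of what that looks like, here is Smalltalk's ifTrue:ifFalse: idea sketched in Scala (names invented): the "conditional" is nothing but a method call dispatched on one of two singleton objects.
sealed trait Bool {
  def ifThenElse[A](thn: => A, els: => A): A
}
case object True  extends Bool { def ifThenElse[A](thn: => A, els: => A) = thn }
case object False extends Bool { def ifThenElse[A](thn: => A, els: => A) = els }

True.ifThenElse("yes", "no")   // "yes" -- no if, no match, only dispatch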

What are practical uses of applicative style?

I am a Scala programmer, learning Haskell now. It's easy to find practical use cases and real world examples for OO concepts, such as decorators, strategy pattern etc. Books and interwebs are filled with it.
I came to the realization that this somehow is not the case for functional concepts. Case in point: applicatives.
I am struggling to find practical use cases for applicatives. Almost all of the tutorials and books I have come across so far provide the examples of [] and Maybe. I expected applicatives to be more applicable than that, seeing all the attention they get in the FP community.
I think I understand the conceptual basis for applicatives (maybe I am wrong), and I have waited long for my moment of enlightenment. But it doesn't seem to be happening. Never while programming have I had a moment when I would shout with joy, "Eureka! I can use an applicative here!" (except, again, for [] and Maybe).
Can someone please guide me how applicatives can be used in a day-to-day programming? How do I start spotting the pattern? Thanks!
Applicatives are great when you've got a plain old function of several variables, and you have the arguments but they're wrapped up in some kind of context. For instance, you have the plain old concatenate function (++) but you want to apply it to 2 strings which were acquired through I/O. Then the fact that IO is an applicative functor comes to the rescue:
Prelude Control.Applicative> (++) <$> getLine <*> getLine
hi
there
"hithere"
Even though you explicitly asked for non-Maybe examples, it seems like a great use case to me, so I'll give an example. You have a regular function of several variables, but you don't know if you have all the values you need (some of them may have failed to compute, yielding Nothing). So essentially because you have "partial values", you want to turn your function into a partial function, which is undefined if any of its inputs is undefined. Then
Prelude Control.Applicative> (+) <$> Just 3 <*> Just 5
Just 8
but
Prelude Control.Applicative> (+) <$> Just 3 <*> Nothing
Nothing
which is exactly what you want.
The basic idea is that you're "lifting" a regular function into a context where it can be applied to as many arguments as you like. The extra power of Applicative over just a basic Functor is that it can lift functions of arbitrary arity, whereas fmap can only lift a unary function.
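In Scala terms, a rough sketch of those lifts for Option (helper names invented; the monadic implementation shown coincides with the applicative one for Option):
def lift1[A, B](f: A => B): Option[A] => Option[B] =
  _ map f   // all a plain Functor gives you: unary functions only

def lift2[A, B, C](f: (A, B) => C): (Option[A], Option[B]) => Option[C] =
  (oa, ob) => for (a <- oa; b <- ob) yield f(a, b)

lift2[Int, Int, Int](_ + _)(Some(3), Some(5))   // Some(8)
lift2[Int, Int, Int](_ + _)(Some(3), None)      // None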
Since many applicatives are also monads, I feel there are really two sides to this question.
Why would I want to use the applicative interface instead of the monadic one when both are available?
This is mostly a matter of style. Although monads have the syntactic sugar of do-notation, using applicative style frequently leads to more compact code.
In this example, we have a type Foo and we want to construct random values of this type. Using the monad instance for IO, we might write
import System.Random (randomIO)

data Foo = Foo Int Double

randomFoo :: IO Foo
randomFoo = do
  x <- randomIO
  y <- randomIO
  return $ Foo x y
The applicative variant is quite a bit shorter.
randomFoo = Foo <$> randomIO <*> randomIO
Of course, we could use liftM2 to get similar brevity; however, the applicative style is neater than relying on arity-specific lifting functions.
In practice, I mostly find myself using applicatives in much the same way I use point-free style: to avoid naming intermediate values when an operation is more clearly expressed as a composition of other operations.
Why would I want to use an applicative that is not a monad?
Since applicatives are more restricted than monads, this means that you can extract more useful static information about them.
An example of this is applicative parsers. Whereas monadic parsers support sequential composition using (>>=) :: Monad m => m a -> (a -> m b) -> m b, applicative parsers only use (<*>) :: Applicative f => f (a -> b) -> f a -> f b. The types make the difference obvious: In monadic parsers the grammar can change depending on the input, whereas in an applicative parser the grammar is fixed.
By limiting the interface in this way, we can for example determine whether a parser will accept the empty string without running it. We can also determine the first and follow sets, which can be used for optimization, or, as I've been playing with recently, constructing parsers that support better error recovery.
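To make that concrete, a toy sketch in Scala (an invented ADT, not a real library) of a purely applicative parser representation; because there is no bind, the whole grammar is a data structure we can analyze without running it:
sealed trait P[A]
case class Pure[A](a: A)                     extends P[A]
case class Sym(c: Char)                      extends P[Char]
case class Ap[A, B](pf: P[A => B], pa: P[A]) extends P[B]
case class Alt[A](l: P[A], r: P[A])          extends P[A]

// Does the parser accept the empty string? Decided statically.
def acceptsEmpty[A](p: P[A]): Boolean = p match {
  case Pure(_)    => true
  case Sym(_)     => false
  case Ap(pf, pa) => acceptsEmpty(pf) && acceptsEmpty(pa)
  case Alt(l, r)  => acceptsEmpty(l) || acceptsEmpty(r)
}
// A monadic FlatMap(p, f: A => P[B]) node would defeat this: the rest of the
// grammar would only be known once we had a parsed value to pass to f.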
I think of Functor, Applicative and Monad as design patterns.
Imagine you want to write a Future[T] class. That is, a class that holds values that are to be calculated.
In a Java mindset, you might create it like
trait Future[T] {
  def get: T
}
Where 'get' blocks until the value is available.
You might realize this, and rewrite it to take a callback:
trait Future[T] {
  def foreach(f: T => Unit): Unit
}
But then what happens if there are two uses for the future? It means you need to keep a list of callbacks. Also, what happens if a method receives a Future[Int] and needs to return a calculation based on the Int inside? Or what do you do if you have two futures and you need to calculate something based on the values they will provide?
But if you know of FP concepts, you know that instead of working directly on T, you can manipulate the Future instance.
trait Future[T] {
  def map[U](f: T => U): Future[U]
}
Now your application changes so that each time you need to work on the contained value, you just return a new Future.
Once you start down this path, you can't stop there. You realize that in order to manipulate two futures you need to model Future as an applicative, in order to create futures you need a monad definition for Future, and so on.
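A sketch of that applicative step (zip is an invented method name here, though Scala's standard Future later gained one): once two futures can be paired, any two-argument function can be applied to values that don't exist yet.
trait Future[T] {
  def map[U](f: T => U): Future[U]
  def zip[U](that: Future[U]): Future[(T, U)]   // the applicative step
}

def sum(fa: Future[Int], fb: Future[Int]): Future[Int] =
  fa zip fb map { case (a, b) => a + b }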
UPDATE: As suggested by @Eric, I've written a blog post: http://www.tikalk.com/incubator/blog/functional-programming-scala-rest-us
I finally understood how applicatives can help in day-to-day programming with that presentation:
https://web.archive.org/web/20100818221025/http://applicative-errors-scala.googlecode.com/svn/artifacts/0.6/chunk-html/index.html
The author shows how applicatives can help in combining validations and handling failures.
The presentation is in Scala, but the author also provides the full code example for Haskell, Java and C#.
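The gist of the technique, as a rough Scala sketch (types invented; Scalaz and friends provide real versions): applicative combination runs every validation and accumulates the failures, which a monadic bind cannot do because it stops at the first failure.
sealed trait Validated[+E, +A]
case class Valid[A](a: A)          extends Validated[Nothing, A]
case class Invalid[E](es: List[E]) extends Validated[E, Nothing]

def map2[E, A, B, C](va: Validated[E, A], vb: Validated[E, B])
                    (f: (A, B) => C): Validated[E, C] =
  (va, vb) match {
    case (Valid(a), Valid(b))       => Valid(f(a, b))
    case (Invalid(e1), Invalid(e2)) => Invalid(e1 ++ e2)   // both errors kept
    case (Invalid(e), _)            => Invalid(e)
    case (_, Invalid(e))            => Invalid(e)
  }

val name: Validated[String, String] = Invalid(List("name is empty"))
val age:  Validated[String, Int]    = Invalid(List("age is negative"))
map2(name, age)((n, a) => n + ", " + a)
// => Invalid(List("name is empty", "age is negative"))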
Warning: my answer is rather preachy/apologetic. So sue me.
Well, how often in your day-to-day Haskell programming do you create new data types? Sounds like you want to know when to make your own Applicative instance, and in all honesty unless you are rolling your own parser, you probably won't need to do it very much. Using applicative instances, on the other hand, you should learn to do frequently.
Applicative is not a "design pattern" like decorators or strategies. It is an abstraction, which makes it much more pervasive and generally useful, but much less tangible. The reason you have a hard time finding "practical uses" is because the example uses for it are almost too simple. You use decorators to put scrollbars on windows. You use strategies to unify the interface for both aggressive and defensive moves for your chess bot. But what are applicatives for? Well, they're a lot more generalized, so it's hard to say what they are for, and that's OK. Applicatives are handy as parsing combinators; the Yesod web framework uses Applicative to help set up and extract information from forms. If you look, you'll find a million and one uses for Applicative; it's all over the place. But since it's so abstract, you just need to get the feel for it in order to recognize the many places where it can help make your life easier.
I think Applicatives ease the general usage of monadic code. How many times have you been in the situation where you wanted to apply a function, but the function was not monadic and the value you wanted to apply it to was monadic? For me: quite a lot of times!
Here is an example that I just wrote yesterday:
ghci> import Data.Time.Clock
ghci> import Data.Time.Calendar
ghci> getCurrentTime >>= return . toGregorian . utctDay
in comparison to this using Applicative:
ghci> import Control.Applicative
ghci> toGregorian . utctDay <$> getCurrentTime
This form looks "more natural" (at least to my eyes :)
Coming at Applicative from Functor, it generalizes fmap to easily express acting on several arguments (liftA2) or a sequence of arguments (using <*>).
Coming at Applicative from Monad, it does not let the computation depend on the value that is computed. Specifically, you cannot pattern match and branch on a returned value; typically all you can do is pass it to another constructor or function.
Thus I see Applicative as sandwiched in between Functor and Monad. Recognizing when you are not branching on the values from a monadic computation is one way to see when to switch to Applicative.
Here is an example taken from the aeson package:
data Coord = Coord { x :: Double, y :: Double }

instance FromJSON Coord where
  parseJSON (Object v) =
    Coord <$>
      v .: "x" <*>
      v .: "y"
There are some ADTs like ZipList that can have applicative instances, but not monadic instances. This was a very helpful example for me when understanding the difference between applicatives and monads. Since so many applicatives are also monads, it's easy to not see the difference between the two without a concrete example like ZipList.
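A rough Scala analogy of the two behaviours (example invented): Haskell's default list applicative pairs every combination, agreeing with the list monad, while ZipList combines pointwise, which is why it is the standard example of an Applicative with no corresponding Monad instance.
// Regular list applicative: every combination (same as the list monad)
for { x <- List(1, 2); y <- List(10, 20) } yield x + y
// => List(11, 21, 12, 22)

// ZipList-style applicative: pointwise combination
(List(1, 2) zip List(10, 20)) map { case (x, y) => x + y }
// => List(11, 22)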
I think it might be worthwhile to browse the sources of packages on Hackage, and see first-handedly how applicative functors and the like are used in existing Haskell code.
I described an example of practical use of the applicative functor in a discussion, which I quote below.
Note the code examples are pseudo-code for my hypothetical language which would hide the type classes in a conceptual form of subtyping, so if you see a method call for apply just translate into your type class model, e.g. <*> in Scalaz or Haskell.
If we mark elements of an array or hashmap with null or none to indicate that their index or key is valid yet valueless, the Applicative enables us, without any boilerplate, to skip the valueless elements while applying operations to the elements that have a value. More importantly, it can automatically handle any Wrapped semantics that are unknown a priori, i.e. operations on T over Hashmap[Wrapped[T]] (and over any level of composition, e.g. Hashmap[Wrapped[Wrapped2[T]]], because the applicative is composable but the monad is not).
I can already picture how it will make my code easier to understand. I can focus on the semantics, not on all the cruft to get me there, and my semantics will be open under extension of Wrapped, whereas all your example code isn't.
Significantly, I forgot to point out before that your prior examples do not emulate the return value of the Applicative, which will be a List, not a Nullable, Option, or Maybe. So even my attempts to repair your examples were not emulating Applicative.apply.
Remember the functionToApply is the input to Applicative.apply, so the container maintains control.
list1.apply( list2.apply( ... listN.apply( List.lift(functionToApply) ) ... ) )
Equivalently.
list1.apply( list2.apply( ... listN.map(functionToApply) ... ) )
And my proposed syntactical sugar, which the compiler would translate to the above:
funcToApply(list1, list2, ... listN)
It is useful to read that interactive discussion, because I can't copy it all here. I expect that url to not break, given who the owner of that blog is. For example, I quote from further down the discussion.
the conflation of out-of-statement control flow with assignment is probably not desired by most programmers
Applicative.apply is for generalizing the partial application of functions to parameterized types (a.k.a. generics) at any level of nesting (composition) of the type parameter. This is all about making more generalized composition possible. The generality can't be accomplished by pulling it outside the completed evaluation (i.e. return value) of the function, analogous to how an onion can't be peeled from the inside out.
Thus it isn't conflation; it is a new degree-of-freedom that is not currently available to you. Per our discussion up thread, this is why you must throw exceptions or store them in a global variable, because your language doesn't have this degree-of-freedom. And that is not the only application of these category theory functors (expounded in my comment in the moderator queue).
I provided a link to an example abstracting validation in Scala, F#, and C#, which is currently stuck in the moderator queue. Compare the obnoxious C# version of the code; the reason is that the C# is not generalized. I intuitively expect that C#'s case-specific boilerplate will explode geometrically as the program grows.

The case for point free style in Scala

This may seem really obvious to the FP cognoscenti here, but what is point free style in Scala good for? What would really sell me on the topic is an illustration that shows how point free style is significantly better in some dimension (e.g. performance, elegance, extensibility, maintainability) than code solving the same problem in non-point free style.
Quite simply, it's about being able to avoid specifying a name where none is needed, consider a trivial example:
List("a","b","c") foreach println
In this case, foreach is looking to accept a String => Unit, i.e. a function that accepts a String and returns Unit (essentially, there's no usable return value and it works purely through side effects).
There's no need to bind a name here to each String instance that's passed to println. Arguably, it just makes the code more verbose to do so:
List("a","b","c") foreach {println(_)}
Or even
List("a","b","c") foreach {s => println(s)}
Personally, when I see code that isn't written in point-free style, I take it as an indicator that the bound name may be used twice, or that it has some significance in documenting the code. Likewise, I see point-free style as a sign that I can reason about the code more simply.
One appeal of point-free style in general is that without a bunch of "points" (values as opposed to functions) floating around, which must be repeated in several places to thread them through the computation, there are fewer opportunities to make a mistake, e.g. when typing a variable's name.
However, the advantages of point-free are quickly counterbalanced in Scala by its meagre ability to infer types, a fact which is exacerbated by point-free code because "points" serve as clues to the type inferencer. In Haskell, with its almost-complete type inferencing, this is usually not an issue.
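A tiny illustration of that point (invented example): the same function body compiles point-free only when something pins down its argument type.
// In isolation the placeholder needs an annotation:
val double = (_: Int) * 2        // `val double = _ * 2` does not compile

// Applied to a List[Int], the collection gives inference the clue it needs:
List(1, 2, 3) map (_ * 2)        // List(2, 4, 6)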
I see no advantage other than "elegance": it's a little bit shorter, and may be more readable. It allows you to reason about functions as entities, without mentally going a "level deeper" to function application, but of course you need to get used to it first.
I don't know of any example where performance improves by using it (and it may get worse in cases where you end up with a function object when a method would have sufficed).
Scala's point-free syntax is part of the magic of Scala's operators-which-are-really-methods. Even the most basic operators are method calls.
For example:
val x = 1
val y = x + 1
...is the same as...
val x = 1
val y = x.+(1)
...but of course, the point-free style reads more naturally (the plus appears to be an operator).

why use foldLeft instead of procedural version?

So in reading this question it was pointed out that instead of the procedural code:
def expand(exp: String, replacements: Traversable[(String, String)]): String = {
  var result = exp
  for ((oldS, newS) <- replacements)
    result = result.replace(oldS, newS)
  result
}
You could write the following functional code:
def expand(exp: String, replacements: Traversable[(String, String)]): String = {
  replacements.foldLeft(exp) {
    case (result, (oldS, newS)) => result.replace(oldS, newS)
  }
}
I would almost certainly write the first version because coders familiar with either procedural or functional styles can easily read and understand it, while only coders familiar with functional style can easily read and understand the second version.
But setting readability aside for the moment, is there something that makes foldLeft a better choice than the procedural version? I might have thought it would be more efficient, but it turns out that the implementation of foldLeft is actually just the procedural code above. So is it just a style choice, or is there a good reason to use one version or the other?
Edit: Just to be clear, I'm not asking about other functions, just foldLeft. I'm perfectly happy with the use of foreach, map, filter, etc. which all map nicely onto for-comprehensions.
Answer: There are really two good answers here (provided by delnan and Dave Griffith) even though I could only accept one:
Use foldLeft because there are additional optimizations, e.g. using a while loop which will be faster than a for loop.
Use fold if it ever gets added to regular collections, because that will make the transition to parallel collections trivial.
It's shorter and clearer - yes, you need to know what a fold is to understand it, but when you're programming in a language that's 50% functional, you should know these basic building blocks anyway. A fold is exactly what the procedural code does (repeatedly applying an operation), but it's given a name and generalized. And while it's only a small wheel you're reinventing, it's still a reinvented wheel.
And in case the implementation of foldLeft should ever get some special perk - say, extra optimizations - you get that for free, without updating countless methods.
Other than a distaste for mutable variables (even mutable locals), the basic reason to use fold in this case is clarity, with occasional brevity. Most of the wordiness of the fold version is because you have to use an explicit function definition with a destructuring bind. If each element in the list is used precisely once in the fold operation (a common case), this can be simplified to use the short form. Thus the classic definition of the sum of a collection of numbers
collection.foldLeft(0)(_+_)
is much simpler and shorter than any equivalent imperative construct.
One additional meta-reason to use functional collection operations, although not directly applicable in this case, is to enable a move to using parallel collection operations if needed for performance. Fold can't be parallelized, but often fold operations can be turned into commutative-associative reduce operations, and those can be parallelized. With Scala 2.9, changing something from non-parallel functional to parallel functional utilizing multiple processing cores can sometimes be as easy as dropping a .par onto the collection you want to execute parallel operations on.
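A sketch of that move (Scala 2.9-era syntax; in current Scala the parallel collections live in a separate module):
val xs = (1 to 1000000).toVector

xs.foldLeft(0)(_ + _)   // inherently sequential: each step needs the previous result
xs.par.reduce(_ + _)    // may split across cores; requires an associative operation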
One word I haven't seen mentioned here yet is declarative:
Declarative programming is often defined as any style of programming that is not imperative. A number of other common definitions exist that attempt to give the term a definition other than simply contrasting it with imperative programming. For example:
A program that describes what computation should be performed and not how to compute it
Any programming language that lacks side effects (or more specifically, is referentially transparent)
A language with a clear correspondence to mathematical logic.
These definitions overlap substantially.
Higher-order functions (HOFs) are a key enabler of declarativity, since we only specify the what (e.g. "using this collection of values, multiply each value by 2, sum the result") and not the how (e.g. initialize an accumulator, iterate with a for loop, extract values from the collection, add to the accumulator...).
Compare the following:
// Sugar-free Scala (Still better than Java<5)
def sumDoubled1(xs: List[Int]) = {
  var sum = 0                   // Initialized correctly?
  for (i <- 0 until xs.size) {  // Fenceposts?
    sum = sum + (xs(i) * 2)     // Correct value being extracted?
                                // Value extraction and +/* smashed together
  }
  sum                           // Correct value returned?
}

// Iteration sugar (similar to Java 5)
def sumDoubled2(xs: List[Int]) = {
  var sum = 0
  for (x <- xs)          // We don't need to worry about fenceposts or
    sum = sum + (x * 2)  // value extraction anymore; that's progress
  sum
}

// Verbose Scala
def sumDoubled3(xs: List[Int]) = xs.map((x: Int) => x*2).              // the doubling
                                 reduceLeft((x: Int, y: Int) => x+y)   // the addition

// Idiomatic Scala
def sumDoubled4(xs: List[Int]) = xs.map(_*2).reduceLeft(_+_)
//                                      ^ the doubling  ^ the addition
Note that our first example, sumDoubled1, is already more declarative than (most would say superior to) C/C++/Java<5 for loops, because we haven't had to micromanage the iteration state and termination logic, but we're still vulnerable to off-by-one errors.
Next, in sumDoubled2, we're basically at the level of Java>=5. There are still a couple things that can go wrong, but we're getting pretty good at reading this code-shape, so errors are quite unlikely. However, don't forget that a pattern that's trivial in a toy example isn't always so readable when scaled up to production code!
With sumDoubled3, desugared for didactic purposes, and sumDoubled4, the idiomatic Scala version, the iteration, initialization, value extraction and choice of return value are all gone.
Sure, it takes time to learn to read the functional versions, but we've drastically foreclosed our options for making mistakes. The "business logic" is clearly marked, and the plumbing is chosen from the same menu that everyone else is reading from.
It is worth pointing out that there is another way of calling foldLeft which takes advantages of:
The ability to use (almost) any Unicode symbol in an identifier
The feature that if a method name ends with a colon :, and is called infix, then the target and parameter are switched
For me this version is much clearer, because I can see that I am folding the expr value into the replacements collection:
def expand(expr: String, replacements: Traversable[(String, String)]): String = {
  (expr /: replacements) { case (r, (o, n)) => r.replace(o, n) }
}

Scala's for-comprehensions: vital feature or syntactic sugar?

When I first started looking at Scala, I liked the look of for-comprehensions. They seemed to be a bit like the foreach loops I was used to from Java 5, but with functional restrictions and a lot of sweet syntactic niceness.
But as I've absorbed the Scala style, I find that every time I could use a for-comprehension I'm using map, flatMap, filter, reduce and foreach instead. The intention of the code seems clearer to me that way, with fewer potential hidden surprises, and it's usually shorter code too.
As far as I'm aware, for-comprehensions are always compiled down into these methods anyway, so I'm wondering: what are they actually for? Am I missing some functional revelation (it wouldn't be the first time)? Do for-comprehensions do something the other features can't, or would at least be much clumsier at? Do they shine under a particular use case? Is it really just a matter of personal taste?
Please refer to this question. The short answer is that for-comprehensions can be more readable. In particular, if you have many nested generators, the actual scope of what you are doing becomes more clear, and you don't need huge indents.
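For instance, a three-generator comprehension next to its (roughly) desugared form (invented example):
val xs, ys, zs = List(1, 2, 3)

// the comprehension scales without extra indentation...
for { x <- xs; y <- ys; z <- zs } yield x + y + z

// ...while the desugared version nests one level per generator
xs flatMap { x => ys flatMap { y => zs map { z => x + y + z } } }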
Another great use of for-comprehension is for internal DSL. ScalaQL is a great example of this. It can turn this
val underAge = for {
  p <- Person
  c <- Company
  if p.company is c
  if p.age < 14
} yield p
into this
SELECT p.* FROM people p JOIN companies c ON p.company_id = c.id WHERE p.age < 14
and a whole lot more.
for-comprehensions are syntactic sugar, but that doesn't mean they aren't vital. They are usually more concise than their expanded form, which is nice, but perhaps more importantly they help programmers from imperative languages use functional constructs.
When I first started with Scala I used for-comprehensions a lot, because they were familiar. Then I almost stopped completely, because I felt like using the underlying methods was more explicit and therefore clearer. Now I'm back to using for-comprehensions because I think they better express the intent of what I'm doing rather than the means of doing it.
In some cases, for comprehensions can express intent better, so when they do, use them.
Also note that with for comprehensions, you get pattern matching for free. For example, iterating over a Map is much simpler with for comprehension:
for ((key, value) <- map) println (key + "-->" + value)
than with foreach:
map foreach { case (key, value) => println (key + "-->" + value) }
You are right. The for-comprehension is syntactic sugar. I believe the underlying methods are more succinct and easier to read, once you are used to them.
Compare the following equivalent statements:
1. for (i <- 1 to 100; if (i % 3 == 0)) yield Math.pow(i, 2)
2. (1 to 100).filter(_ % 3 == 0).map(Math.pow(_, 2))
In my opinion, the addition of the semicolon in #1 distracts from the sense that this is a single chained statement. There is also a sense that i is a var (is it 1, or is it 99, or something in between?) which detracts from an otherwise functional style.
Option 2 is more evidently a chain of method calls on objects. Each link in the chain clearly stating its responsibility. There are no intermediate variables.
Perhaps for comprehensions are included as a convenience for developers transitioning from Java. Regardless, which is chosen is a matter of style and preference.