I am in the process of learning Scala through the Coursera course (progfun).
We are being taught to think functionally and to use tail recursion where possible when implementing functions/methods.
As an example, we were taught to implement foreach on a list like this:
def foreach[T](list: List[T], f: T => Unit): Unit = {
  if (!list.isEmpty) {
    f(list.head)
    foreach(list.tail, f)
  }
}
Then I was surprised when I found the following implementation in some of the Scala APIs:
override /*IterableLike*/
def foreach[B](f: A => B) {
  var these = this
  while (!these.isEmpty) {
    f(these.head)
    these = these.tail
  }
}
So how come we are taught to use recursion and to avoid mutable variables, while the API is implemented with the opposite techniques?
Have a look at scala.collection.LinearSeqOptimized, which scala.collection.immutable.List extends. (A similar implementation is found in the List class itself.)
Don't forget that Scala is intended to be a multiparadigm language. For educational purposes, it's good to know how to read and write tail-call recursive functions. But when using the language day-to-day, it's important to remember that it's not pure FP.
It's possible that part of the library predates TCO and the @tailrec annotation. You'd have to look at the commit history to find out.
That implementation of foreach might use a mutable var, but from the outside, it appears to be pure. Ultimately, this is exactly what TCO would do behind the scenes.
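For reference, here is a sketch (not the standard library's actual code) of a tail-recursive foreach carrying the @tailrec annotation; the compiler rewrites such a call into the same kind of loop as the while version above:

import scala.annotation.tailrec

// A minimal sketch, not the standard library's code: @tailrec asks the
// compiler to verify the recursive call is in tail position and to
// compile it down to a loop.
@tailrec
def foreach[T](list: List[T], f: T => Unit): Unit =
  if (!list.isEmpty) {
    f(list.head)
    foreach(list.tail, f) // tail call: becomes a jump, no stack growth
  }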
There are two parts to your question:
So how come we are taught to use recursion and to avoid mutable variables
Because the teachers assume that you either already know about imperative programming with mutable state and loops or will be exposed to it sometime during your career anyway, so they would rather focus on teaching you the things you are less likely to pick up on your own.
Also, imperative programming with mutable state is much harder to reason about, much harder to understand and thus much harder to teach.
while the API is implemented with the opposite techniques?
Because the Scala standard library is intended to be a high-performance, industrial-strength library, not a teaching example. Maybe the person who wrote that code profiled it and measured it to be 0.001% faster than the tail-recursive version. Maybe, when that code was written, the compiler couldn't yet reliably optimize the tail-recursive version.
Don't forget that Iterable and friends are the cornerstone of Scala's collections library; the methods you are looking at are probably among the most often executed methods in the entire Scala universe. Even the tiniest performance optimization pays off in a method that is executed billions of times.
I'm currently porting some code from traditional Scala to Scalaz style.
It's fairly common throughout most of my code to use the Seq trait in my exposed API signatures rather than a concrete type (e.g. List, Vector) directly. However, this poses some problems with Scalaz, since it doesn't provide an implementation of a Bind[Seq] typeclass.
For example, this will work correctly:
List(1,2,3,4) >>= bindOperation
But this will not:
Seq(1,2,3,4) >>= bindOperation
failing with the error: could not find implicit value for parameter F0: scalaz.Bind[Seq]
I assume this is an intentional design decision in Scalaz; however, I am unsure about the intended/best practice on how to proceed.
Should I instead write my code directly to List/Vector as appropriate instead of using the more flexible Seq interface? Or should I simply define my own Bind[Seq] typeclass?
The collections library does backflips to accommodate subtyping: when you use map on a specific collection type (list, map, etc.), you'll (usually) get the same type back. It manages this through the use of an extremely complex inheritance hierarchy together with type classes like CanBuildFrom. It gets the job done (at least arguably), but the complexity doesn't feel very principled. It's a mess. Lots of people hate it.
The complexity is generally pretty easy to avoid as a library user, but for a library designer it's a nightmare. If I provide a monad instance for Seq, that means all of my users' types get bumped up the hierarchy to Seq every time they use a monadic operation.
Scalaz folks tend not to like subtyping very much, anyway, so for the most part Scalaz stays around the leaves of the hierarchy—List, Vector, etc. You can see some discussion of this decision on the mailing list, for example.
When I first started using Scalaz I wrote a lot of utility code that tried to provide instances for Seq, etc. and make them usable with CanBuildFrom. Then I stopped, and now I tend to follow Scalaz in only ever using List, Vector, Map, and Set in my own code. If you're committed to "Scalaz style", you should do that as well (or even adopt Scalaz's own IList, ISet, ==>>, etc.). You're not going to find clear agreement on best practices more generally, though, and both approaches can be made to work, so you'll just need to experiment to find which you prefer.
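For completeness, if you did decide to provide the instance anyway, a minimal sketch might look like the following (assuming Scalaz 7's Monad signatures; a Monad[Seq] instance also provides Bind[Seq]):

import scalaz._

// A sketch under the stated assumptions, not an endorsed practice:
implicit val seqMonad: Monad[Seq] = new Monad[Seq] {
  def point[A](a: => A): Seq[A] = Seq(a)
  def bind[A, B](fa: Seq[A])(f: A => Seq[B]): Seq[B] = fa.flatMap(f)
}

With that in scope, Seq(1, 2, 3, 4) >>= bindOperation would compile, but every result would be typed as Seq, which is exactly the hierarchy-flattening effect described above.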
Scala code:
for {
  user <- users
  name <- user.names
  letter <- name.letters
} yield letter
Can we consider such "list comprehension" code as "functional programming" style? Since they will be converted to map and flatMap functions?
Yes, it's definitely a functional technique, particularly assuming that all of those members are fields or pure functions. It's just syntactic sugar for 0 or more flatMaps followed by 1 map (with if clauses translated to withFilter).
Without the yield at the end, it acts more like the imperative for, translating to 1 or more foreach calls; foreach is typically used for executing statements for their side effects. Both translations are sketched below.
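Concretely, the comprehension from the question desugars roughly like this (a sketch reusing the question's names):

// With yield: flatMaps followed by a final map.
users.flatMap(user =>
  user.names.flatMap(name =>
    name.letters.map(letter => letter)))

// Without yield: nested foreach calls, run only for their side effects
// (printing each letter here, as an example).
users.foreach(user =>
  user.names.foreach(name =>
    name.letters.foreach(letter => println(letter))))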
This article describes the syntax in a bit more detail, this excellent answer talks about it in more depth with some of the monadic theory, and this article describes the actual translation rules explicitly.
The list comprehension construct is found quite often in functional programming languages, but it is not distinctive of functional programming. Consider that Python, PHP (from version 5.5) and the next version of JavaScript (ES6) also have similar constructs, but that doesn't make them functional languages.
In the case of Scala, it is true that your example translates to map and flatMap applications, but IMHO even that is not enough to call it functional. Consider this case:
for {
  i <- 1 until 10
} println(i)
This is still a for comprehension, but it actually performs side effects, just as in any imperative language (this loop translates to a foreach invocation).
The bottom line, in my opinion, is that functional programming is not so much about constructs as it is about style: the really important thing for a piece of code to be in FP style is to be side-effect free (or, as in many cases, to be honest about when side effects happen).
If you want, you can do FP even in Java 7: use anonymous classes as closures, mark everything as final, avoid any mutable state, and isolate side effects into special constructs, and you are done. It will be terribly verbose and probably ugly, because the language doesn't support the kind of abstractions that help make this style nice in practice, but it will nevertheless be in functional style.
Can we consider such "list comprehension" code as "functional programming" style?
Monadic list comprehensions with imperative/generator syntax are a relatively new syntactic and semantic innovation.
The original list comprehensions (e.g. NPL or Miranda) were modelled on set comprehensions, and are clearly a declarative construct, albeit one that is translated to nested functions.
Haskell's list comprehensions function in a similar way.
Monadic comprehensions (compiled into monadic guard and binds) should surely be considered functional if we consider monads to be a functional construct.
I am a Scala programmer, learning Haskell now. It's easy to find practical use cases and real world examples for OO concepts, such as decorators, strategy pattern etc. Books and interwebs are filled with it.
I came to the realization that this somehow is not the case for functional concepts. Case in point: applicatives.
I am struggling to find practical use cases for applicatives. Almost all of the tutorials and books I have come across so far provide the examples of [] and Maybe. I expected applicatives to be more applicable than that, seeing all the attention they get in the FP community.
I think I understand the conceptual basis for applicatives (maybe I am wrong), and I have waited long for my moment of enlightenment. But it doesn't seem to be happening. Never while programming, have I had a moment when I would shout with a joy, "Eureka! I can use applicative here!" (except again, for [] and Maybe).
Can someone please guide me how applicatives can be used in a day-to-day programming? How do I start spotting the pattern? Thanks!
Applicatives are great when you've got a plain old function of several variables, and you have the arguments but they're wrapped up in some kind of context. For instance, you have the plain old concatenate function (++) but you want to apply it to 2 strings which were acquired through I/O. Then the fact that IO is an applicative functor comes to the rescue:
Prelude Control.Applicative> (++) <$> getLine <*> getLine
hi
there
"hithere"
Even though you explicitly asked for non-Maybe examples, it seems like a great use case to me, so I'll give an example. You have a regular function of several variables, but you don't know if you have all the values you need (some of them may have failed to compute, yielding Nothing). So essentially because you have "partial values", you want to turn your function into a partial function, which is undefined if any of its inputs is undefined. Then
Prelude Control.Applicative> (+) <$> Just 3 <*> Just 5
Just 8
but
Prelude Control.Applicative> (+) <$> Just 3 <*> Nothing
Nothing
which is exactly what you want.
The basic idea is that you're "lifting" a regular function into a context where it can be applied to as many arguments as you like. The extra power of Applicative over just a basic Functor is that it can lift functions of arbitrary arity, whereas fmap can only lift a unary function.
Since many applicatives are also monads, I feel there are really two sides to this question.
Why would I want to use the applicative interface instead of the monadic one when both are available?
This is mostly a matter of style. Although monads have the syntactic sugar of do-notation, using applicative style frequently leads to more compact code.
In this example, we have a type Foo and we want to construct random values of this type. Using the monad instance for IO, we might write
data Foo = Foo Int Double
randomFoo = do
  x <- randomIO
  y <- randomIO
  return $ Foo x y
The applicative variant is quite a bit shorter.
randomFoo = Foo <$> randomIO <*> randomIO
Of course, we could use liftM2 to get similar brevity; however, the applicative style is neater than having to rely on arity-specific lifting functions.
In practice, I mostly find myself using applicatives much in the same way as I use point-free style: to avoid naming intermediate values when an operation is more clearly expressed as a composition of other operations.
Why would I want to use an applicative that is not a monad?
Since applicatives are more restricted than monads, you can extract more useful static information about them.
An example of this is applicative parsers. Whereas monadic parsers support sequential composition using (>>=) :: Monad m => m a -> (a -> m b) -> m b, applicative parsers only use (<*>) :: Applicative f => f (a -> b) -> f a -> f b. The types make the difference obvious: In monadic parsers the grammar can change depending on the input, whereas in an applicative parser the grammar is fixed.
By limiting the interface in this way, we can for example determine whether a parser will accept the empty string without running it. We can also determine the first and follow sets, which can be used for optimization, or, as I've been playing with recently, constructing parsers that support better error recovery.
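To make the "more useful static information" point concrete, here is a toy sketch (hypothetical types, not any real parser library) of how the fixed structure of an applicative parser lets us compute whether it accepts the empty string without ever running it:

// A toy applicative-parser skeleton: the grammar is a plain data
// structure, so properties such as nullability fall out of a simple
// recursion over it.
sealed trait P[A] { def acceptsEmpty: Boolean }
case class Pure[A](a: A) extends P[A] {
  def acceptsEmpty = true                  // consumes no input
}
case class Token(c: Char) extends P[Char] {
  def acceptsEmpty = false                 // must consume one character
}
case class Ap[A, B](pf: P[A => B], pa: P[A]) extends P[B] {
  def acceptsEmpty = pf.acceptsEmpty && pa.acceptsEmpty
}

With a monadic bind in the grammar, the rest of the parser would be hidden inside a function of the parsed value, so no such analysis could be done without running it.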
I think of Functor, Applicative and Monad as design patterns.
Imagine you want to write a Future[T] class. That is, a class that holds values that are to be calculated.
In a Java mindset, you might create it like this:
trait Future[T] {
  def get: T
}
Where 'get' blocks until the value is available.
You might realize the problem with blocking, and rewrite it to take a callback:
trait Future[T] {
  def foreach(f: T => Unit): Unit
}
But then what happens if there are two uses for the future? It means you need to keep a list of callbacks. Also, what happens if a method receives a Future[Int] and needs to return a calculation based on the Int inside? Or what do you do if you have two futures and you need to calculate something based on the values they will provide?
But if you know of FP concepts, you know that instead of working directly on T, you can manipulate the Future instance.
trait Future[T] {
  def map[U](f: T => U): Future[U]
}
Now your application changes so that each time you need to work on the contained value, you just return a new Future.
Once you start down this path, you can't stop there. You realize that in order to manipulate two futures you need to model Future as an applicative, in order to create futures you need a monad definition for Future, etc.
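A sketch of where that path leads (a hypothetical trait, not scala.concurrent.Future):

trait Future[T] {
  def map[U](f: T => U): Future[U]
  // Monadic bind: derive a new Future from the value this one will produce.
  def flatMap[U](f: T => Future[U]): Future[U]
  // Applicative-style combination of two independent futures.
  def zipWith[U, R](that: Future[U])(f: (T, U) => R): Future[R] =
    this.flatMap(t => that.map(u => f(t, u)))
}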
UPDATE: As suggested by @Eric, I've written a blog post: http://www.tikalk.com/incubator/blog/functional-programming-scala-rest-us
I finally understood how applicatives can help in day-to-day programming with that presentation:
https://web.archive.org/web/20100818221025/http://applicative-errors-scala.googlecode.com/svn/artifacts/0.6/chunk-html/index.html
The author shows how applicatives can help with combining validations and handling failures.
The presentation is in Scala, but the author also provides the full code example for Haskell, Java and C#.
Warning: my answer is rather preachy/apologetic. So sue me.
Well, how often in your day-to-day Haskell programming do you create new data types? Sounds like you want to know when to make your own Applicative instance, and in all honesty unless you are rolling your own parser, you probably won't need to do it very much. Using applicative instances, on the other hand, you should learn to do frequently.
Applicative is not a "design pattern" like decorators or strategies. It is an abstraction, which makes it much more pervasive and generally useful, but much less tangible. The reason you have a hard time finding "practical uses" is because the example uses for it are almost too simple. You use decorators to put scrollbars on windows. You use strategies to unify the interface for both aggressive and defensive moves for your chess bot. But what are applicatives for? Well, they're a lot more generalized, so it's hard to say what they are for, and that's OK. Applicatives are handy as parsing combinators; the Yesod web framework uses Applicative to help set up and extract information from forms. If you look, you'll find a million and one uses for Applicative; it's all over the place. But since it's so abstract, you just need to get the feel for it in order to recognize the many places where it can help make your life easier.
I think applicatives ease the general usage of monadic code. How many times have you had the situation where you wanted to apply a function, but the function was not monadic and the value you wanted to apply it to was monadic? For me: quite a lot of times!
Here is an example that I just wrote yesterday:
ghci> import Data.Time.Clock
ghci> import Data.Time.Calendar
ghci> getCurrentTime >>= return . toGregorian . utctDay
in comparison to this using Applicative:
ghci> import Control.Applicative
ghci> toGregorian . utctDay <$> getCurrentTime
This form looks "more natural" (at least to my eyes :)
Coming at Applicative from "Functor" it generalizes "fmap" to easily express acting on several arguments (liftA2) or a sequence of arguments (using <*>).
Coming at Applicative from "Monad" it does not let the computation depend on the value that is computed. Specifically you cannot pattern match and branch on a returned value, typically all you can do is pass it to another constructor or function.
Thus I see Applicative as sandwiched in between Functor and Monad. Recognizing when you are not branching on the values from a monadic computation is one way to see when to switch to Applicative.
Here is an example taken from the aeson package:
data Coord = Coord { x :: Double, y :: Double }

instance FromJSON Coord where
  parseJSON (Object v) =
    Coord <$>
      v .: "x" <*>
      v .: "y"
There are some ADTs like ZipList that can have applicative instances, but not monadic instances. This was a very helpful example for me when understanding the difference between applicatives and monads. Since so many applicatives are also monads, it's easy to not see the difference between the two without a concrete example like ZipList.
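The difference can be sketched in plain Scala, emulating both behaviors with ordinary lists: the ZipList applicative applies functions and arguments pointwise, while the default List applicative (the one that agrees with its monad) takes all combinations:

val fs: List[Int => Int] = List(_ + 1, _ * 2)
val xs: List[Int]        = List(10, 20)

// ZipList-style application: pointwise.
val zipped  = fs.zip(xs).map { case (f, x) => f(x) }  // List(11, 40)

// Default List application (consistent with flatMap): all combinations.
val crossed = fs.flatMap(f => xs.map(f))              // List(11, 21, 20, 40)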
I think it might be worthwhile to browse the sources of packages on Hackage, and see first-handedly how applicative functors and the like are used in existing Haskell code.
I described an example of practical use of the applicative functor in a discussion, which I quote below.
Note the code examples are pseudo-code for my hypothetical language, which would hide the type classes in a conceptual form of subtyping, so if you see a method call for apply, just translate it into your type class model, e.g. <*> in Scalaz or Haskell.
If we mark elements of an array or hashmap with null or none to indicate that their index or key is valid yet valueless, the Applicative enables, without any boilerplate, skipping the valueless elements while applying operations to the elements that have a value. More importantly, it can automatically handle any Wrapped semantics that are unknown a priori, i.e. operations on T over Hashmap[Wrapped[T]] (over any level of composition, e.g. Hashmap[Wrapped[Wrapped2[T]]], because Applicative is composable but Monad is not).

I can already picture how it will make my code easier to understand. I can focus on the semantics, not on all the cruft to get me there, and my semantics will be open under extension of Wrapped, whereas all your example code isn't.
Significantly, I forgot to point out before that your prior examples do not emulate the return value of the Applicative, which will be a List, not a Nullable, Option, or Maybe. So even my attempts to repair your examples were not emulating Applicative.apply.

Remember the functionToApply is the input to Applicative.apply, so the container maintains control.
list1.apply( list2.apply( ... listN.apply( List.lift(functionToApply) ) ... ) )
Equivalently.
list1.apply( list2.apply( ... listN.map(functionToApply) ... ) )
And my proposed syntactic sugar, which the compiler would translate to the above:
funcToApply(list1, list2, ... listN)
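In concrete Scala terms, the "skipping the valueless elements" idea from the quote might look like the following sketch, with Map and Option standing in for the hypothetical Hashmap and Wrapped:

val m: Map[String, Option[Int]] = Map("a" -> Some(1), "b" -> None)

// Apply a plain Int => Int through both layers of context; entries
// marked valueless (None) are skipped automatically.
val incremented = m.map { case (k, v) => k -> v.map(_ + 1) }
// Map("a" -> Some(2), "b" -> None)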
It is useful to read that interactive discussion, because I can't copy all of it here. I expect that URL not to break, given who the owner of that blog is. For example, I quote from further down the discussion:
the conflation of out-of-statement control flow with assignment is probably not desired by most programmers
Applicative.apply is for generalizing the partial application of functions to parameterized types (a.k.a. generics) at any level of nesting (composition) of the type parameter. This is all about making more generalized composition possible. The generality can't be accomplished by pulling it outside the completed evaluation (i.e. return value) of the function, analogous to how an onion can't be peeled from the inside out.
Thus it isn't conflation; it is a new degree-of-freedom that is not currently available to you. Per our discussion upthread, this is why you must throw exceptions or store them in a global variable, because your language doesn't have this degree-of-freedom. And that is not the only application of these category theory functors (expounded in my comment in the moderator queue).
I provided a link to an example abstracting validation in Scala, F#, and C#, which is currently stuck in the moderator queue. Compare the obnoxious C# version of the code; the reason it is obnoxious is that the C# is not generalized. I intuitively expect that the case-specific C# boilerplate will explode geometrically as the program grows.
What are the key differences between the approaches taken by Scala and F# to unify OO and FP paradigms?
EDIT
What are the relative merits and demerits of each approach? If, in spite of the support for subtyping, F# can infer the types of function arguments then why can't Scala?
I have looked at F#, working through low-level tutorials, so my knowledge of it is very limited. However, it was apparent to me that its style is essentially functional, with OO being more of an add-on -- much more an ADT + module system than true OO. The feeling I get can best be described as if all methods in it were static (as in Java static).
See, for instance, any code using the pipe operator (|>). Take this snippet from the wikipedia entry on F#:
[1 .. 10]
|> List.map fib
(* equivalent without the pipe operator *)
List.map fib [1 .. 10]
The function map is not a method of the list instance. Instead, it works like a static method on a List module which takes a list instance as one of its parameters.
Scala, on the other hand, is fully OO. Let's start, first, with the Scala equivalent of that code:
List(1 to 10: _*) map fib
// Without operator notation or implicits:
List.apply(Predef.intWrapper(1).to(10): _*).map(fib)
Here, map is a method on the instance of List. Static-like methods, such as intWrapper on Predef or apply on List, are much less common. Then there are functions, such as fib above. Here, fib is not a method on Int, but neither is it a static method. Instead, it is an object -- the second main difference I see between F# and Scala.
Let's consider the F# implementation from the Wikipedia, and an equivalent Scala implementation:
// F#, from the wiki
let rec fib n =
    match n with
    | 0 | 1 -> n
    | _ -> fib (n - 1) + fib (n - 2)

// Scala equivalent
def fib(n: Int): Int = n match {
  case 0 | 1 => n
  case _     => fib(n - 1) + fib(n - 2)
}
The above Scala implementation is a method, but Scala converts it into a function so that it can be passed to map. I'll modify it below so that it becomes a method that returns a function instead, to show how functions work in Scala.
// F#, returning a lambda, as suggested in the comments
let rec fib = function
    | 0 | 1 as n -> n
    | n -> fib (n - 1) + fib (n - 2)

// Scala method returning a function
def fib: Int => Int = {
  case n @ (0 | 1) => n
  case n           => fib(n - 1) + fib(n - 2)
}

// Same thing without syntactic sugar:
def fib = new Function1[Int, Int] {
  def apply(param0: Int): Int = param0 match {
    case n @ (0 | 1) => n
    case n           => fib.apply(n - 1) + fib.apply(n - 2)
  }
}
So, in Scala, all functions are objects implementing the trait FunctionX, which defines a method called apply. As shown here and in the list creation above, .apply can be omitted, which makes function calls look just like method calls.
In the end, everything in Scala is an object -- an instance of a class -- every such object belongs to a class, and all code belongs to a method, which gets executed somehow. Even match in the example above used to be a method, but it was converted into a keyword quite a while ago to avoid some problems.
So, how about the functional part of it? F# belongs to one of the most traditional families of functional languages. While it doesn't have some features some people think are important for functional languages, the fact is that F# is functional by default, so to speak.
Scala, on the other hand, was created with the intent of unifying the functional and OO models, instead of just providing them as separate parts of the language. The extent to which it was successful depends on what you deem to be functional programming. Here are some of the things that were focused on by Martin Odersky:
Functions are values. They are objects too -- because all values are objects in Scala -- but the concept that a function is a value that can be manipulated is an important one, with its roots all the way back to the original Lisp implementation.
Strong support for immutable data types. Functional programming has always been concerned with decreasing the side effects of a program, so that functions can be analysed as true mathematical functions. So Scala made it easy to make things immutable, but it did not do two things which FP purists criticize it for:
It did not make mutability harder.
It does not provide an effect system, by which mutability can be statically tracked.
Support for algebraic data types. Algebraic data types (abbreviated ADT, which confusingly also stands for Abstract Data Type, a different thing) are very common in functional programming, and are most useful in situations where one would commonly use the visitor pattern in OO languages.
As with everything else, ADTs in Scala are implemented as classes and methods, with some syntactic sugar to make them painless to use, as sketched below. However, Scala is much more verbose than F# (or other functional languages, for that matter) in supporting them. For example, instead of F#'s | for case statements, it uses case.
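For illustration, a small sketch of Scala's encoding of an ADT (a sealed trait plus case classes, consumed by pattern matching):

sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(width: Double, height: Double) extends Shape

// Pattern matching plays the role the visitor pattern plays in OO languages.
def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}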
Support for non-strictness. Non-strictness means only computing stuff on demand. It is an essential aspect of Haskell, where it is tightly integrated with the side effect system. In Scala, however, non-strictness support is quite timid and incipient. It is available and used, but in a restricted manner.
For instance, Scala's non-strict list, the Stream, does not support a truly non-strict foldRight the way Haskell does. Furthermore, some benefits of non-strictness are only gained when it is the default in the language, instead of an option.
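As a small example of the non-strictness that is available, Stream computes its tail on demand, which permits the usual sketch of an infinite sequence:

// Only the elements actually demanded are ever computed.
lazy val fibs: Stream[Int] =
  0 #:: 1 #:: fibs.zip(fibs.tail).map { case (a, b) => a + b }

fibs.take(10).toList  // List(0, 1, 1, 2, 3, 5, 8, 13, 21, 34)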
Support for list comprehensions. Actually, Scala calls it a for-comprehension, as the way it is implemented is completely divorced from lists. In its simplest terms, a list comprehension can be thought of as the map function/method shown in the example, though nesting of map statements (supported via flatMap in Scala) as well as filtering (filter or withFilter in Scala, depending on strictness requirements) are usually expected.
This is a very common operation in functional languages, and often light in syntax -- as with Python's in operator. Again, Scala is somewhat more verbose than usual.
In my opinion, Scala is unparalleled in combining FP and OO. It comes from the OO side of the spectrum towards the FP side, which is unusual. Mostly, I see FP languages with OO tacked on -- and it feels tacked on to me. I guess FP in Scala probably feels the same way to functional language programmers.
EDIT
Reading some other answers, I realized there was another important topic: type inference. Lisp was a dynamically typed language, and that pretty much set the expectations for functional languages. The modern statically typed functional languages all have strong type inference systems, most often the Hindley-Milner[1] algorithm, which makes type declarations essentially optional.
Scala can't use the Hindley-Milner algorithm because of Scala's support for inheritance[2]. So Scala has to adopt a much less powerful type inference algorithm -- in fact, type inference in Scala is intentionally undefined in the specification, and the subject of ongoing improvement (its improvement is one of the biggest features of the upcoming 2.8 version of Scala, for instance).
In the end, however, Scala requires all parameters to have their types declared when defining methods. In some situations, such as recursion, return types for methods also have to be declared.
Functions in Scala can often have their types inferred instead of declared, though. For instance, no type declaration is necessary here: List(1, 2, 3) reduceLeft (_ + _), where _ + _ is actually an anonymous function of type Function2[Int, Int, Int].
Likewise, type declaration of variables is often unnecessary, but inheritance may require it. For instance, Some(2) and None have a common superclass Option, but they actually belong to different subclasses. So one would usually declare var o: Option[Int] = None to make sure the correct type is assigned.
This limited form of type inference is much better than what statically typed OO languages usually offer, which gives Scala a sense of lightness, and much worse than what statically typed FP languages usually offer, which gives Scala a sense of heaviness. :-)
Notes:
[1] Actually, the algorithm originates from Damas and Milner, who called it "Algorithm W", according to Wikipedia.
[2] Martin Odersky mentioned in a comment here that:
"The reason Scala does not have Hindley/Milner type inference is that it is very difficult to combine with features such as overloading (the ad-hoc variant, not type classes), record selection, and subtyping."
He goes on to state that it may not actually be impossible, and that it came down to a trade-off. Please do go to that link for more information and, if you come up with a clearer statement or, better yet, some paper one way or the other, I'd be grateful for the reference.
Let me thank Jon Harrop for looking this up, as I was assuming it was impossible. Well, maybe it is, and I couldn't find a proper link. Note, however, that it is not inheritance alone that causes the problem.
F# is functional -- it allows OO pretty well, but the design and philosophy are functional nevertheless. Examples:
Haskell-style functions
Automatic currying
Automatic generics
Type inference for arguments
It feels relatively clumsy to use F# in a mainly object-oriented way, so one could describe its main goal as integrating OO into functional programming.
Scala is multi-paradigm with focus on flexibility. You can choose between authentic FP, OOP and procedural style depending on what currently fits best. It's really about unifying OO and functional programming.
There are quite a few points that you can use for comparing the two (or three). First, here are some notable points that I can think of:
Syntax
Syntactically, F# and OCaml are based on the functional programming tradition (space separated and more lightweight), while Scala is based on the object-oriented style (although Scala makes it more lightweight).
Integrating OO and FP
Both F# and Scala very smoothly integrate OO with FP (because there is no contradiction between these two!!) You can declare classes to hold immutable data (functional aspect) and provide members related to working with the data, you can also use interfaces for abstraction (object-oriented aspects). I'm not as familiar with OCaml, but I would think that it puts more emphasis on the OO side (compared to F#)
Programming style in F#
I think that the usual programming style used in F# (if you don't need to write a .NET library and don't have other limitations) is probably more functional, and that you'd use OO features only when you need them. This means that you group functionality using functions, modules and algebraic data types.
Programming style in Scala
In Scala, the default programming style is more object-oriented (in its organization); however, you still (probably) write functional programs, because the "standard" approach is to write code that avoids mutation.
What are the key differences between the approaches taken by Scala and F# to unify OO and FP paradigms?
The key difference is that Scala tries to blend the paradigms by making sacrifices (usually on the FP side) whereas F# (and OCaml) generally draw a line between the paradigms and let the programmer choose between them for each task.
Scala had to make sacrifices in order to unify the paradigms. For example:
First-class functions are an essential feature of any functional language (ML, Scheme and Haskell). All functions are first-class in F#. Member functions are second-class in Scala.
Overloading and subtypes impede type inference. F# provides a large sublanguage that sacrifices these OO features in order to provide powerful type inference when these features are not used (requiring type annotations when they are used). Scala pushes these features everywhere in order to maintain consistent OO at the cost of poor type inference everywhere.
Another consequence of this is that F# is based upon tried and tested ideas whereas Scala is pioneering in this respect. This is ideal for the motivations behind the projects: F# is a commercial product and Scala is programming language research.
As an aside, Scala also sacrificed other core features of FP such as tail-call optimization for pragmatic reasons due to limitations of their VM of choice (the JVM). This also makes Scala much more OOP than FP. Note that there is a project to bring Scala to .NET that will use the CLR to do genuine TCO.
What are the relative merits and demerits of each approach? If, in spite of the support for subtyping, F# can infer the types of function arguments then why can't Scala?
Type inference is at odds with OO-centric features like overloading and subtypes. F# chose type inference over consistency with respect to overloading. Scala chose ubiquitous overloading and subtypes over type inference. This makes F# more like OCaml and Scala more like C#. In particular, Scala is no more a functional programming language than C# is.
Which is better is entirely subjective, of course, but I personally much prefer the tremendous brevity and clarity that comes from powerful type inference in the general case. OCaml is a wonderful language but one pain point was the lack of operator overloading that required programmers to use + for ints, +. for floats, +/ for rationals and so on. Once again, F# chooses pragmatism over obsession by sacrificing type inference for overloading specifically in the context of numerics, not only on arithmetic operators but also on arithmetic functions such as sin. Every corner of the F# language is the result of carefully chosen pragmatic trade-offs like this. Despite the resulting inconsistencies, I believe this makes F# far more useful.
From this article on Programming Languages:
Scala is a rugged, expressive, strictly superior replacement for Java. Scala is the programming language I would use for a task like writing a web server or an IRC client. In contrast to OCaml [or F#], which was a functional language with an object-oriented system grafted to it, Scala feels more like a true hybrid of object-oriented and functional programming. (That is, object-oriented programmers should be able to start using Scala immediately, picking up the functional parts only as they choose to.)

I first learned about Scala at POPL 2006 when Martin Odersky gave an invited talk on it. At the time I saw functional programming as strictly superior to object-oriented programming, so I didn't see a need for a language that fused functional and object-oriented programming. (That was probably because all I wrote back then were compilers, interpreters and static analyzers.)

The need for Scala didn't become apparent to me until I wrote a concurrent HTTPD from scratch to support long-polled AJAX for yaplet. In order to get good multicore support, I wrote the first version in Java. As a language, I don't think Java is all that bad, and I can enjoy well-done object-oriented programming. As a functional programmer, however, the lack of (or needlessly verbose) support for functional programming features (like higher-order functions) grates on me when I program in Java. So, I gave Scala a chance.

Scala runs on the JVM, so I could gradually port my existing project into Scala. It also means that Scala, in addition to its own rather large library, has access to the entire Java library as well. This means you can get real work done in Scala.

As I started using Scala, I became impressed by how cleverly the functional and object-oriented worlds blended together. In particular, Scala has a powerful case class/pattern-matching system that addressed pet peeves lingering from my experiences with Standard ML, OCaml and Haskell: the programmer can decide which fields of an object should be matchable (as opposed to being forced to match on all of them), and variable-arity arguments are permitted. In fact, Scala even allows programmer-defined patterns. I write a lot of functions that operate on abstract syntax nodes, and it's nice to be able to match on only the syntactic children, but still have fields for things such as annotations or lines in the original program. The case class system lets one split the definition of an algebraic data type across multiple files or across multiple parts of the same file, which is remarkably handy.

Scala also supports well-defined multiple inheritance through class-like devices called traits.

Scala also allows a considerable degree of overloading; even function application and array update can be overloaded. In my experience, this tends to make my Scala programs more intuitive and concise.

One feature that turns out to save a lot of code, in the same way that type classes save code in Haskell, is implicits. You can imagine implicits as an API for the error-recovery phase of the type-checker. In short, when the type checker needs an X but got a Y, it will check to see if there's an implicit function in scope that converts Y into X; if it finds one, it "casts" using the implicit. This makes it possible to look like you're extending just about any type in Scala, and it allows for tighter embeddings of DSLs.
From the above excerpt it is clear that Scala's approach to unifying the OO and FP paradigms is far superior to that of OCaml or F#.
The syntax of F# was taken from OCaml but the object model of F# was taken from .NET. This gives F# a light and terse syntax that is characteristic of functional programming languages and at the same time allows F# to interoperate with the existing .NET languages and .NET libraries very smoothly through its object model.
Scala does a similar job on the JVM that F# does on the CLR. However Scala has chosen to adopt a more Java-like syntax. This may assist in its adoption by object-oriented programmers but to a functional programmer it can feel a bit heavy. Its object model is similar to Java's allowing for seamless interoperation with Java but has some interesting differences such as support for traits.
If functional programming means programming with functions, then Scala bends that a bit. In Scala, if I understand correctly, you're programming with methods instead of functions.
When the class (and the object of that class) behind the method doesn't matter, Scala will let you pretend it's just a function. Perhaps a Scala language lawyer can elaborate on this distinction (if it even is a distinction), and on any consequences.
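To make the distinction concrete, a small sketch (hypothetical names):

object Greeter {
  // A method: it belongs to the Greeter object and is not itself a value.
  def greetMethod(name: String): String = "hi " + name

  // A function: an ordinary object implementing Function1[String, String].
  val greetFunction: String => String = name => "hi " + name
}

// A method can still be used where a function is expected, because the
// compiler eta-expands it into a function object behind the scenes:
List("a", "b").map(Greeter.greetMethod)
List("a", "b").map(Greeter.greetFunction)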