Scala versus F# question: how do they unify OO and FP paradigms?

What are the key differences between the approaches taken by Scala and F# to unify OO and FP paradigms?
EDIT
What are the relative merits and demerits of each approach? If, in spite of its support for subtyping, F# can infer the types of function arguments, then why can't Scala?

I have looked at F#, working through low-level tutorials, so my knowledge of it is very limited. However, it was apparent to me that its style was essentially functional, with OO more like an add-on -- much more of an ADT + module system than true OO. The feeling I get is best described as if all methods in it were static (as in Java static).
See, for instance, any code using the pipe operator (|>). Take this snippet from the wikipedia entry on F#:
[1 .. 10]
|> List.map fib
(* equivalent without the pipe operator *)
List.map fib [1 .. 10]
The function map is not a method of the list instance. Instead, it works like a static method on a List module which takes a list instance as one of its parameters.
Scala, on the other hand, is fully OO. Let's start, first, with the Scala equivalent of that code:
List(1 to 10: _*) map fib
// Without operator notation or implicits:
List.apply(Predef.intWrapper(1).to(10): _*).map(fib)
Here, map is a method on the List instance. Static-like methods, such as intWrapper on Predef or apply on List, are much less common. Then there are functions, such as fib above. Here, fib is not a method on Int, but nor is it a static method. Instead, it is an object -- the second main difference I see between F# and Scala.
Let's consider the F# implementation from the Wikipedia, and an equivalent Scala implementation:
// F#, from the wiki
let rec fib n =
    match n with
    | 0 | 1 -> n
    | _ -> fib (n - 1) + fib (n - 2)
// Scala equivalent
def fib(n: Int): Int = n match {
  case 0 | 1 => n
  case _ => fib(n - 1) + fib(n - 2)
}
The above Scala implementation is a method, but Scala converts that into a function to be able to pass it to map. I'll modify it below so that it becomes a method that returns a function instead, to show how functions work in Scala.
// F#, returning a lambda, as suggested in the comments
let rec fib = function
    | 0 | 1 as n -> n
    | n -> fib (n - 1) + fib (n - 2)
// Scala method returning a function
def fib: Int => Int = {
  case n @ (0 | 1) => n
  case n => fib(n - 1) + fib(n - 2)
}
// Same thing without syntactic sugar:
def fib: Int => Int = new Function1[Int, Int] {
  def apply(param0: Int): Int = param0 match {
    case n @ (0 | 1) => n
    case n => fib.apply(n - 1) + fib.apply(n - 2)
  }
}
So, in Scala, all functions are objects implementing one of the FunctionN traits, which define a method called apply. As shown here and in the list creation above, .apply can be omitted, which makes function calls look just like method calls.
In the end, everything in Scala is an object -- an instance of a class -- every such object belongs to a class, and all code belongs to a method, which gets executed somehow. Even match in the example above used to be a method; it was converted into a keyword quite a while ago to avoid certain problems.
So, how about the functional part of it? F# belongs to one of the most traditional families of functional languages. While it lacks some features that some people consider important for functional languages, the fact is that F# is functional by default, so to speak.
Scala, on the other hand, was created with the intent of unifying the functional and OO models, instead of just providing them as separate parts of the language. The extent to which it was successful depends on what you deem to be functional programming. Here are some of the things Martin Odersky focused on:
Functions are values. They are objects too -- because all values are objects in Scala -- but the concept that a function is a value that can be manipulated is an important one, with its roots all the way back to the original Lisp implementation.
Strong support for immutable data types. Functional programming has always been concerned with reducing the side effects in a program, so that functions can be analysed as true mathematical functions. So Scala made it easy to make things immutable, but it did not do two things which FP purists criticize it for:
It did not make mutability harder.
It does not provide an effect system, by which mutability can be statically tracked.
Support for algebraic data types. Algebraic data types (called ADTs, which confusingly is also the acronym for Abstract Data Type, a different thing) are very common in functional programming, and are most useful in situations where one would commonly use the visitor pattern in OO languages.
As with everything else, ADTs in Scala are implemented as classes and methods, with some syntactic sugar to make them painless to use. However, Scala is much more verbose than F# (or other functional languages, for that matter) in supporting them. For example, instead of F#'s | for case statements, it uses the case keyword.
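As a hedged sketch of what that encoding looks like (the Shape, Circle and Rect names are invented for illustration): a sealed trait plus case classes, consumed with case clauses where F# would use |:
// An ADT as a sealed trait plus case classes
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(width: Double, height: Double) extends Shape

// Pattern matching replaces the visitor pattern
def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}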
Support for non-strictness. Non-strictness means only computing stuff on demand. It is an essential aspect of Haskell, where it is tightly integrated with the side effect system. In Scala, however, non-strictness support is quite timid and incipient. It is available and used, but in a restricted manner.
For instance, Scala's non-strict list, the Stream, does not support a truly non-strict foldRight the way Haskell does. Furthermore, some benefits of non-strictness are only gained when it is the default in the language, instead of an option.
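To illustrate the opt-in nature of this support, here is a minimal sketch using Stream (the class mentioned above); only the demanded prefix of the infinite sequence is ever computed:
// The tail is evaluated on demand, so an infinite definition is legal:
lazy val fibs: Stream[Int] =
  0 #:: 1 #:: fibs.zip(fibs.tail).map { case (a, b) => a + b }

fibs.take(10).toList // List(0, 1, 1, 2, 3, 5, 8, 13, 21, 34)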
Support for list comprehension. Actually, Scala calls it for-comprehension, as its implementation is completely divorced from lists. In its simplest terms, a list comprehension can be thought of as the map function/method shown in the example, though nesting of map statements (supported with flatMap in Scala) as well as filtering (filter or withFilter in Scala, depending on strictness requirements) is usually expected.
This is a very common operation in functional languages, and often light on syntax -- as in Python's comprehensions with their in operator. Again, Scala is somewhat more verbose than usual.
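To make the comparison concrete, here is a small sketch (made-up data) of a for-comprehension next to the nested flatMap/withFilter/map calls the compiler roughly rewrites it into:
val pairs = for {
  x <- List(1, 2, 3)
  y <- List(10, 20) if y > 10
} yield x * y

// Roughly the compiler's translation:
val same = List(1, 2, 3).flatMap(x =>
  List(10, 20).withFilter(y => y > 10).map(y => x * y))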
In my opinion, Scala is unparalleled in combining FP and OO. It comes from the OO side of the spectrum towards the FP side, which is unusual. Mostly, I see FP languages with OO tacked on -- and it feels tacked on to me. I guess FP in Scala probably feels the same way to functional-language programmers.
EDIT
Reading some other answers, I realized there was another important topic: type inference. Lisp was a dynamically typed language, and that pretty much set the expectations for functional languages. The modern statically typed functional languages all have strong type inference systems, most often the Hindley-Milner [1] algorithm, which makes type declarations essentially optional.
Scala can't use the Hindley-Milner algorithm because of Scala's support for inheritance [2]. So Scala has to adopt a much less powerful type inference algorithm -- in fact, type inference in Scala is intentionally left undefined in the specification, and is subject to ongoing improvement (improved type inference is one of the biggest features of the upcoming 2.8 version of Scala, for instance).
In the end, however, Scala requires all parameters to have their types declared when defining methods. In some situations, such as recursion, return types for methods also have to be declared.
Functions in Scala can often have their types inferred instead of declared, though. For instance, no type declaration is necessary here: List(1, 2, 3) reduceLeft (_ + _), where _ + _ is actually an anonymous function of type Function2[Int, Int, Int].
Likewise, type declarations for variables are often unnecessary, but inheritance may require them. For instance, Some(2) and None have a common superclass Option, but actually belong to different subclasses. So one would usually declare var o: Option[Int] = None to make sure the correct type is assigned.
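A short illustration of that pitfall (the variable names are made up):
var a = None              // inferred as None.type, probably not what was wanted
// a = Some(2)            // does not compile: Some[Int] is not None.type
var o: Option[Int] = None // annotated with the intended supertype
o = Some(2)               // fine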
This limited form of type inference is much better than what statically typed OO languages usually offer, which gives Scala a sense of lightness, and much worse than what statically typed FP languages usually offer, which gives Scala a sense of heaviness. :-)
Notes:
[1] Actually, the algorithm originates from Damas and Milner, who called it "Algorithm W", according to Wikipedia.
[2] Martin Odersky mentioned in a comment here that:
The reason Scala does not have Hindley/Milner type inference is that it is very difficult to combine with features such as overloading (the ad-hoc variant, not type classes), record selection, and subtyping.
He goes on to state that it may not be actually impossible, and it came down to a trade-off. Please do go to that link for more information, and, if you do come up with a clearer statement or, better yet, some paper one way or another, I'd be grateful for the reference.
Let me thank Jon Harrop for looking this up, as I was assuming it was impossible. Well, maybe it is, and I couldn't find a proper link. Note, however, that it is not inheritance alone causing the problem.

F# is functional -- it supports OO pretty well, but the design and philosophy are functional nevertheless. Examples:
Haskell-style functions
Automatic currying
Automatic generics
Type inference for arguments
It feels relatively clumsy to use F# in a mainly object-oriented way, so one could describe its main goal as integrating OO into functional programming.
Scala is multi-paradigm with focus on flexibility. You can choose between authentic FP, OOP and procedural style depending on what currently fits best. It's really about unifying OO and functional programming.

There are quite a few points that you can use for comparing the two (or three). First, here are some notable points that I can think of:
Syntax
Syntactically, F# and OCaml follow the functional-programming tradition (space-separated application and generally lighter-weight syntax), while Scala is based on the object-oriented style (although Scala makes it more lightweight).
Integrating OO and FP
Both F# and Scala integrate OO with FP very smoothly (because there is no contradiction between the two!). You can declare classes to hold immutable data (the functional aspect) and provide members related to working with that data; you can also use interfaces for abstraction (the object-oriented aspect). I'm not as familiar with OCaml, but I would think that it puts more emphasis on the OO side (compared to F#).
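As a hedged sketch of that blend (Account and its overdraft policy are invented for illustration), an immutable class can expose OO-style members that return new values instead of mutating state:
// Immutable data (FP) with members operating on it (OO)
final case class Account(owner: String, balance: BigDecimal) {
  def deposit(amount: BigDecimal): Account = copy(balance = balance + amount)
  def withdraw(amount: BigDecimal): Account =
    if (amount <= balance) copy(balance = balance - amount)
    else this // illustrative policy: silently refuse overdrafts
}

Account("Ada", 100).deposit(50).withdraw(30) // Account("Ada", 120)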
Programming style in F#
I think that the usual programming style used in F# (if you don't need to write .NET library and don't have other limitations) is probably more functional and you'd use OO features only when you need to. This means that you group functionality using functions, modules and algebraic data types.
Programming style in Scala
In Scala, the default programming style is more object-oriented (in the organization), however you still (probably) write functional programs, because the "standard" approach is to write code that avoids mutation.

What are the key differences between the approaches taken by Scala and F# to unify OO and FP paradigms?
The key difference is that Scala tries to blend the paradigms by making sacrifices (usually on the FP side) whereas F# (and OCaml) generally draw a line between the paradigms and let the programmer choose between them for each task.
Scala had to make sacrifices in order to unify the paradigms. For example:
First-class functions are an essential feature of any functional language (ML, Scheme and Haskell). All functions are first-class in F#. Member functions are second-class in Scala (see the sketch after these points).
Overloading and subtypes impede type inference. F# provides a large sublanguage that sacrifices these OO features in order to provide powerful type inference when these features are not used (requiring type annotations when they are used). Scala pushes these features everywhere in order to maintain consistent OO at the cost of poor type inference everywhere.
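To make the first point about member functions concrete, here is a hedged Scala 2 sketch (MathOps is an illustrative name); the trailing underscore performs eta-expansion, lifting a method into a first-class Function1 value:
object MathOps {
  def double(x: Int): Int = x * 2 // a method: it lives on an object, not a value itself
}

val f: Int => Int = MathOps.double _ // explicit eta-expansion to a first-class value
List(1, 2, 3).map(f)                 // List(2, 4, 6)
List(1, 2, 3).map(MathOps.double)    // here the compiler eta-expands automatically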
Another consequence of this is that F# is based upon tried and tested ideas whereas Scala is pioneering in this respect. This is ideal for the motivations behind the projects: F# is a commercial product and Scala is programming language research.
As an aside, Scala also sacrificed other core features of FP such as tail-call optimization for pragmatic reasons due to limitations of their VM of choice (the JVM). This also makes Scala much more OOP than FP. Note that there is a project to bring Scala to .NET that will use the CLR to do genuine TCO.
What are the relative merits and demerits of each approach? If, in spite of its support for subtyping, F# can infer the types of function arguments, then why can't Scala?
Type inference is at odds with OO-centric features like overloading and subtypes. F# chose type inference over consistency with respect to overloading. Scala chose ubiquitous overloading and subtypes over type inference. This makes F# more like OCaml and Scala more like C#. In particular, Scala is no more a functional programming language than C# is.
Which is better is entirely subjective, of course, but I personally much prefer the tremendous brevity and clarity that comes from powerful type inference in the general case. OCaml is a wonderful language but one pain point was the lack of operator overloading that required programmers to use + for ints, +. for floats, +/ for rationals and so on. Once again, F# chooses pragmatism over obsession by sacrificing type inference for overloading specifically in the context of numerics, not only on arithmetic operators but also on arithmetic functions such as sin. Every corner of the F# language is the result of carefully chosen pragmatic trade-offs like this. Despite the resulting inconsistencies, I believe this makes F# far more useful.

From this article on Programming Languages:
Scala is a rugged, expressive, strictly superior replacement for Java. Scala is the programming language I would use for a task like writing a web server or an IRC client. In contrast to OCaml [or F#], which was a functional language with an object-oriented system grafted to it, Scala feels more like a true hybrid of object-oriented and functional programming. (That is, object-oriented programmers should be able to start using Scala immediately, picking up the functional parts only as they choose to.)
I first learned about Scala at POPL 2006 when Martin Odersky gave an invited talk on it. At the time I saw functional programming as strictly superior to object-oriented programming, so I didn't see a need for a language that fused functional and object-oriented programming. (That was probably because all I wrote back then were compilers, interpreters and static analyzers.)
The need for Scala didn't become apparent to me until I wrote a concurrent HTTPD from scratch to support long-polled AJAX for yaplet. In order to get good multicore support, I wrote the first version in Java. As a language, I don't think Java is all that bad, and I can enjoy well-done object-oriented programming. As a functional programmer, however, the lack of (or needlessly verbose) support of functional programming features (like higher-order functions) grates on me when I program in Java. So, I gave Scala a chance.
Scala runs on the JVM, so I could gradually port my existing project into Scala. It also means that Scala, in addition to its own rather large library, has access to the entire Java library as well. This means you can get real work done in Scala.
As I started using Scala, I became impressed by how cleverly the functional and object-oriented worlds blended together. In particular, Scala has a powerful case class/pattern-matching system that addressed pet peeves lingering from my experiences with Standard ML, OCaml and Haskell: the programmer can decide which fields of an object should be matchable (as opposed to being forced to match on all of them), and variable-arity arguments are permitted. In fact, Scala even allows programmer-defined patterns. I write a lot of functions that operate on abstract syntax nodes, and it's nice to be able to match on only the syntactic children, but still have fields for things such as annotations or lines in the original program. The case class system lets one split the definition of an algebraic data type across multiple files or across multiple parts of the same file, which is remarkably handy.
Scala also supports well-defined multiple inheritance through class-like devices called traits.
Scala also allows a considerable degree of overloading; even function application and array update can be overloaded. In my experience, this tends to make my Scala programs more intuitive and concise.
One feature that turns out to save a lot of code, in the same way that type classes save code in Haskell, is implicits. You can imagine implicits as an API for the error-recovery phase of the type-checker. In short, when the type checker needs an X but got a Y, it will check to see if there's an implicit function in scope that converts Y into X; if it finds one, it "casts" using the implicit. This makes it possible to look like you're extending just about any type in Scala, and it allows for tighter embeddings of DSLs.
From the above excerpt it is clear that Scala's approach to unifying the OO and FP paradigms is far superior to that of OCaml or F#.

The syntax of F# was taken from OCaml but the object model of F# was taken from .NET. This gives F# a light and terse syntax that is characteristic of functional programming languages and at the same time allows F# to interoperate with the existing .NET languages and .NET libraries very smoothly through its object model.
Scala does a similar job on the JVM that F# does on the CLR. However Scala has chosen to adopt a more Java-like syntax. This may assist in its adoption by object-oriented programmers but to a functional programmer it can feel a bit heavy. Its object model is similar to Java's allowing for seamless interoperation with Java but has some interesting differences such as support for traits.

If functional programming means programming with functions, then Scala bends that a bit. In Scala, if I understand correctly, you're programming with methods instead of functions.
When the class (and the object of that class) behind the method don't matter, Scala will let you pretend it's just a function. Perhaps a Scala language lawyer can elaborate on this distinction (if it even is a distinction), and any consequences.

Related

Scala: why it is possible to have Some(None)?

scala> Option(None)
res2: Option[None.type] = Some(None)
Why is this possible? Why doesn't Option of None return None?
Scala (like most statically typed functional programming languages) is built out of pieces that can be composed together in consistent ways. This is in contrast with other programming languages and libraries (often dynamic ones) that attempt to predict the programmer's intentions and often support this by having lots of special cases (automatic flattening of nested constructions, etc.).
In Scala Option is just a type constructor—you can create an Option[A] for literally any type A by writing Option(a). Option[Int] is itself a type, for example, so you could have an Option[Option[Int]], an Option[Option[Option[Int]]], and so on. There are no special cases, just a general mechanism for building up programs.
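A short illustration of that uniformity; flattening exists, but it is explicit rather than automatic:
val nested: Option[Option[Int]] = Option(Option(42)) // Some(Some(42))
nested.flatten                                       // Some(42): flattening is opt-in
Option(None)                                         // Some(None), as in the question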
Not sure if this is a good answer, but try reading this article:
http://danielwestheide.com/blog/2012/12/19/the-neophytes-guide-to-scala-part-5-the-option-type.html
Some is actually a wrapper for the value you are trying to use, so your code is valid.

Monadic coding style?

I find myself writing my programs more and more in the style like this:
myList
  .map(el => ...)
  .filter(_....)
  .map { el =>
    ...
    ...
  }
  .zipWithIndex
  .foreach(println)
Often my subroutine consists entirely of such block of code. I never planned to change my style of coding into this, it just happened naturally as I used Scala more and more.
Is it correct to say that such code is written in "monadic style"? I do have a vague understanding of what a monad is, and I am using Scala's collections here, and those seem to be monadic types. On the other hand I am not creating any monadic types myself, I am just using them. In other words, when I want to say merely that I program in take_something.change_it.change_it.use_it style, is it ok to refer to it as "monadic style"?
I would say that programming that way is a consequence of using monads, but I'd only describe it as fully programming in monadic style if you're really relying on monads to program completely free of side-effects, using things like IO monads and State monads. You tend to end up with a whole bunch of nested (as opposed to chained) maps, flatMaps, and filters. The Scala for...yield construct helps organize code that works with multiple nested monads. You may find the book Functional Programming in Scala interesting, for really delving into that style.
Your example is certainly monadic in the smaller sense of using chained transformations on one monadic type. That pattern shows up in a lot of code that I wouldn't necessarily describe as "monadic", and is sometimes called method chaining or a fluent interface.
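For contrast, here is a hedged sketch (made-up data) of the same nested-Option computation in chained style and in the for...yield style mentioned above:
val maybeUser: Option[String]   = Some("ada")
val maybeDomain: Option[String] = Some("example.org")

// Chained/nested style:
val email1 = maybeUser.flatMap(u => maybeDomain.map(d => u + "@" + d))

// for...yield organizing the same flatMap/map nesting:
val email2 = for {
  u <- maybeUser
  d <- maybeDomain
} yield u + "@" + d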

Can "list comprehension" be considered as "functional programming"?

Scala code:
for {
  user <- users
  name <- user.names
  letter <- name.letters
} yield letter
Can we consider such "list comprehension" code as "functional programming" style? Since they will be converted to map and flatMap functions?
Yes, it's definitely a functional technique, particularly assuming that all of those members are fields or pure functions. It's just syntactic sugar for 0 or more flatMaps followed by 1 map (with if clauses translated to withFilter).
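Concretely, the comprehension in the question translates (roughly) to the following, reusing the question's identifiers:
users.flatMap(user =>
  user.names.flatMap(name =>
    name.letters.map(letter => letter)))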
Without the yield at the end, it acts more like the imperative for, translating to one or more foreach calls; foreach is typically used to execute statements for their side effects.
This article describes the syntax in a bit more detail, this excellent answer talks about it in more depth with some of the monadic theory, and this article describes the actual rules translation explicitly.
The list comprehension construct is found quite often in functional programming languages, but it is not distinctive of functional programming. Consider that Python, PHP (from version 5.5) and the next version of JavaScript (ES6) also have similar constructs, but that doesn't make them functional languages.
In the case of Scala, it is true that your example translates to map and flatMap applications, but even that, IMHO, is not enough to say that it is functional. Consider this case:
for {
  i <- 1 until 10
} println(i)
This is still a for-comprehension, but it performs side effects just like any imperative loop (it actually translates to a foreach invocation).
The bottom line, in my opinion, is that functional programming is not so much about constructs as it is about style: the really important thing for a piece of code to be in FP style is to be side-effect free (or, as in many cases, to be honest about when side effects happen).
If you want, you can do FP even in Java 7: use anonymous classes as closures, mark everything final, avoid mutable state, and isolate side effects into special constructs, and you are done. It will be terribly verbose and probably ugly, because the language doesn't support the kind of abstractions that make this style pleasant in practice, but it will nevertheless be in functional style.
Can we consider such "list comprehension" code as "functional programming" style?
Monadic list comprehensions with imperative/generator syntax are a relatively new syntactic and semantic innovation.
The original list comprehensions (e.g. NPL or Miranda) were modelled on set comprehensions, and are clearly a declarative construct, albeit one that is translated to nested functions.
Haskell's list comprehensions function in a similar way.
Monadic comprehensions (compiled into monadic guard and binds) should surely be considered functional if we consider monads to be a functional construct.

How pure and lazy can Scala be?

This is just one of those "I was wondering..." questions.
Scala has immutable data structures and (optional) lazy vals etc.
How close can a Scala program be to one that is fully pure (in a functional programming sense) and fully lazy (or, as Ingo points out, sufficiently non-strict)? What values are unavoidably mutable, and what evaluation is unavoidably eager?
Regarding laziness -- currently, passing a parameter to a method is strict by default:
def square(a: Int) = a * a
but you can use call-by-name parameters:
def square(a: =>Int) = a * a
but this is not lazy in the sense of computing the value only once, when first needed:
scala> square({println("calculating");5})
calculating
calculating
res0: Int = 25
There's been some work into adding lazy method parameters, but it hasn't been integrated yet (the below declaration should print "calculating" from above only once):
def square(lazy a: Int) = a * a
This is one piece that is missing, although you could simulate it with a local lazy val:
def square(ap: => Int) = {
  lazy val a = ap
  a * a
}
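Continuing the earlier REPL session (output sketched), this version forces the by-name argument only once:
scala> square({println("calculating"); 5})
calculating
res1: Int = 25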
Regarding mutability - there is nothing holding you back from writing immutable data structures and avoid mutation. You can do this in Java or C as well. In fact, some immutable data structures rely on the lazy primitive to achieve better complexity bounds, but the lazy primitive can be simulated in other languages as well - at the cost of extra syntax and boilerplate.
You can always write immutable data structures, lazy computations and fully pure programs in Scala. The problem is that the Scala programming model also allows writing non-pure programs, so the type checker can't always infer some properties of the program (such as purity) which it could infer if the programming model were more restrictive.
For example, in a language with pure expressions the a * a in the call-by-name definition above (a: =>Int) could be optimized to evaluate a only once, regardless of the call-by-name semantics. If the language allows side-effects, then such an optimization is not always applicable.
Scala can be as pure and lazy as you like, but a) the compiler won't keep you honest with regards to purity and b) it will take a little extra work to make it lazy. There's nothing too profound about this; you can even write lazy and pure Java code if you really want to (see here if you dare; achieving laziness in Java requires eye-bleeding amounts of nested anonymous inner classes).
Purity
Whereas Haskell tracks impurities via the type system, Scala has chosen not to go that route, and it's difficult to tack that sort of thing on when you haven't made it a goal from the beginning (and also when interoperability with a thoroughly impure language like Java is a major goal of the language).
That said, some believe it's possible and worthwhile to make the effort to document effects in Scala's type system. But I think purity in Scala is best treated as a matter of self-discipline, and you must be perpetually skeptical about the supposed purity of third-party code.
Laziness
Haskell is lazy by default but can be made stricter with some annotations sprinkled in your code... Scala is the opposite: strict by default but with the lazy keyword and by-name parameters you can make it as lazy as you like.
Feel free to keep things immutable. On the other hand, there's no side effect tracking, so you can't enforce or verify it.
As for non-strictness, here's the deal... First, if you choose to go completely non-strict, you'll be forsaking all of Scala's classes. Even Scalaz is not non-strict for the most part. If you are willing to build everything yourself, you can make your methods non-strict and your values lazy.
Next, I wonder if implicit parameters can be non-strict or not, or what would be the consequences of making them non-strict. I don't see a problem, but I could be wrong.
But, most problematic of all, function parameters are strict, and so are closure parameters.
So, while it is theoretically possible to go fully non-strict, it will be incredibly inconvenient.

Disadvantages of Scala type system versus Haskell?

I have read that Scala's type system is weakened by Java interoperability and therefore cannot perform some of the same powers as Haskell's type system. Is this true? Is the weakness because of type erasure, or am I wrong in every way? Is this difference the reason that Scala has no typeclasses?
The big difference is that Scala doesn't have Hindley-Milner global type inference and instead uses a form of local type inference, requiring you to specify types for method parameters and the return type for overloaded or recursive functions.
This isn't driven by type erasure or by other requirements of the JVM. All possible difficulties here can be overcome, and have been, just consider Jaskell - http://docs.codehaus.org/display/JASKELL/Home
H-M inference doesn't work in an object-oriented context -- specifically, when subtype polymorphism is used (as opposed to the ad-hoc polymorphism of type classes). This is crucial for strong interop with other Java libraries, and (to a lesser extent) for getting the best possible optimisation from the JVM.
It's not really valid to state that either Haskell or Scala has a stronger type system, just that they are different. Both languages are pushing the boundaries for type-based programming in different directions, and each language has unique strengths that are hard to duplicate in the other.
Scala's type system is different from Haskell's, although Scala's concepts are sometimes directly inspired by Haskell's strengths and its knowledgeable community of researchers and professionals.
Of course, running on a VM not primarily intended for functional programming in the first place creates some compatibility concerns with existing languages targeting this platform.
Because most of the reasoning about types happens at compile time, the runtime limitations of Java (as a language and as a platform) are nothing to be concerned about (except type erasure, although exactly this limitation arguably makes integration into the Java ecosystem more seamless).
As far as I know the only "compromise" on the type system level with Java is a special syntax to handle Raw Types. While Scala doesn't even allow Raw Types anymore, it accepts older Java class files with that bug.
Maybe you have seen code like List[_] (or the longer equivalent List[T] forSome { type T }). This is a compatibility feature with Java, but is treated as an existential type internally too and doesn't weaken the type system.
Scala's type system does support type classes, although in a more verbose way than Haskell. I suggest reading this paper, which might create a different impression on the relative strength of Scala's type system (the table on page 17 serves as a nice list of very powerful type system concepts).
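For reference, here is a hedged sketch of that type-class encoding in Scala 2, via an implicit parameter; Show and its instance are illustrative names, not a standard-library API:
trait Show[A] {
  def show(a: A): String
}

object Show {
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int): String = "Int(" + a + ")"
  }
}

def describe[A](a: A)(implicit ev: Show[A]): String = ev.show(a)

describe(42) // "Int(42)" -- the instance is resolved implicitly, like a Haskell constraint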
Not necessarily related to the power of the type system is the approach Scala's and Haskell's compilers use to infer types, although it has some impact on the way people write code.
Having a powerful type inference algorithm can make it worthwhile to write more abstract code (you can decide yourself if that is a good thing in all cases).
In the end Scala's and Haskell's type system are driven by the desire to provide their users with the best tools to solve their problems, but have taken different paths to that goal.
Another interesting point to consider is that Scala directly supports the classical OO style. This means there are subtype relations (e.g. List is a subclass of Seq), and that makes type inference trickier. Add to this the fact that you can mix in traits in Scala, which means that a given type can have multiple supertype relations, making it trickier still.
Scala does not have rank-n types, although it may be possible to work around this limitation in certain cases.
I have only a little experience with Haskell, but the most obvious difference I notice between Scala's type system and Haskell's is type inference.
In Scala, there is no global type inference; you must explicitly declare the types of function arguments.
For example, in Scala you need to write this:
def add(x: Int, y: Int) = x + y
instead of
add x y = x + y
This may cause problems when you need a generic version of the add function that works with every type that has a "+" method. There is a workaround for this, but it gets more verbose.
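One such workaround, sketched with the standard Numeric type class (the extra implicit parameter is the verbosity being paid):
def add[T](x: T, y: T)(implicit num: Numeric[T]): T = num.plus(x, y)

add(1, 2)     // 3
add(1.5, 2.5) // 4.0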
But in real use, I have found Scala's type system powerful enough for daily usage, and I almost never need those workarounds for generics; maybe this is because I come from the Java world.
And the limitation of explicitly declaring argument types is not necessarily a bad thing: you need to document them anyway.
Well, are they Turing reducible?
See Oleg Kiselyov's page http://okmij.org/ftp/
...
One can implement the lambda calculus in Haskell's type system. If Scala can do that, then in a sense Haskell's type system and Scala's type system compute the same types. The questions are: How natural is one over the other? How elegant is one over the other?