What constitutes a value in pure functional programming?
I am asking myself these questions after seeing a sentence:
Task(or IO) has a constructor that captures side-effects as values.
Is a function a value?
If so, what does it mean when equating two functions: assert(f == g)? For two functions that are equivalent but defined separately, f != g; why don't they work as 1 == 1?
Is an object with methods a value? (for example IO { println("") })
Is an object with setter methods and mutable state a value?
Is an object with mutable state which works as a state machine a value?
How do we test whether something is a value? Is immutability a sufficient condition?
UPDATE:
I'm using Scala.
I'll try to explain what a value is by contrasting it with things that are not values.
Roughly speaking, values are structures produced by the process of evaluation which correspond to terms that cannot be simplified any further.
Terms
First, what are terms? Terms are syntactic structures that can be evaluated. Admittedly, this is a bit circular, so let's look at a few examples:
Constant literals are terms:
42
Functions applied to other terms are terms:
atan2(123, 456 + 789)
Function literals are terms:
(x: Int) => x * x
Constructor invocations are terms:
Option(42)
Contrast this to:
Class declarations / definitions are not terms:
case class Foo(bar: Int)
that is, you cannot write
val x = (case class Foo(bar: Int))
this would be illegal.
Likewise, trait and type definitions are not terms:
type Bar = Int
sealed trait Baz
Unlike function literals, method definitions are not terms:
def foo(x: Int) = x * x
for example:
val x = (a: Int) => a * 2 // function literal, ok
val y = (def foo(a: Int): Int = a * 2) // no, not a term
Package declarations and import statements are not terms:
import foo.bar.baz._ // ok
List(package foo, import bar) // no
Normal forms, values
Now, when it is hopefully somewhat clearer what a term is, what was meant by "cannot be simplified any further"? In idealized functional programming languages, you can define what a normal form, or rather a weak head normal form, is. Essentially, a term is in (weak head) normal form if no reduction rules can be applied to the term to make it any simpler. Again, a few examples:
This is a term, but it's not in normal form, because it can be reduced to 42:
40 + 2
This is not in weak head normal form:
((x: Int) => x * 2)(3)
because we can further evaluate it to 6.
This lambda is in weak head normal form (it's stuck, because the computation cannot proceed until an x is supplied):
(x: Int) => x * 42
This is not in normal form, because it can be simplified further:
42 :: List(10 + 20, 20 + 30)
This is in normal form, no further simplifications possible:
List(42, 30, 50)
Thus,
42,
(x: Int) => x * 42,
List(42, 30, 50)
are values, whereas
40 + 2,
((x: Int) => x * 2)(3),
42 :: List(10 + 20, 20 + 30)
are not values, but merely non-normalized terms that can be further simplified.
Examples and non-examples
I'll just go through your list of sub-questions one-by-one:
Is a function a value?
Yes. Terms like (x1: T1, ..., xn: Tn) => body are considered stuck terms in WHNF, and functional languages can represent them directly, so they are values.
If so, what does it mean when equating two functions: assert(f == g)? For two functions that are equivalent but defined separately, f != g; why don't they work as 1 == 1?
Function extensionality is somewhat unrelated to the question whether something is a value or not. In the above "definition by example", I talked only about the shape of the terms, not about the existence / non-existence of some computable relations defined on those terms. The sad fact is that you can't even really determine whether a lambda-expression actually represents a function (i.e. whether it terminates for all inputs), and it is also known that there cannot be an algorithm that could determine whether two functions produce the same output for all inputs (i.e. are extensionally equal).
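To make this concrete, a small Scala sketch (the names f and g are mine, hypothetical): two lambdas that agree on every input still compare unequal, because == on function objects falls back to reference equality.
val f = (x: Int) => x + x
val g = (x: Int) => 2 * x
f == g         // false: two distinct function objects
f(21) == g(21) // true: they agree on this input (indeed on all inputs)
f == f         // true: same object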
Is an object with methods a value? (for example IO { println("") })
Not quite clear what you're asking here. Objects don't have methods. Classes have methods. If you mean method invocations, then, no, they are terms that can be further simplified (by actually running the method), so they are not values.
Is an object with setter methods and mutable state a value?
Is an object with mutable state which works as a state machine a value?
There is no such thing in pure functional programming.
What constitutes a value in pure functional programming?
Background
In pure functional programming there is no mutation. Hence, code such as
case class C(x: Int)
val a = C(42)
val b = C(42)
would become equivalent to
case class C(x: Int)
val a = C(42)
val b = a
since, in pure functional programming, if a.x == b.x, then we would have a == b. That is, a == b would be implemented comparing the values inside.
However, Scala is not pure, since it allows mutation, like Java. In that case, we do NOT have the equivalence between the two snippets above when we declare case class C(var x: Int). Indeed, performing a.x += 1 afterwards does not affect b.x in the first snippet, but does in the second one, where a and b point to the same object. In such a case, it is useful to have a comparison a == b which compares the object references, rather than the inner integer values.
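A small sketch of that difference (using the C, a, and b from the snippets above, with a var field):
case class C(var x: Int)
val a = C(42)
val b = a      // second snippet: b is an alias of a
a.x += 1
b.x            // 43: the mutation through a is visible via b

val a2 = C(42)
val b2 = C(42) // first snippet: two separate objects
a2.x += 1
b2.x           // still 42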
When using case class C(x: Int), Scala comparisons a == b behave closer to pure functional programming, comparing the integer values. With regular (non-case) classes, Scala instead compares object references, breaking the equivalence between the two snippets. But, again, Scala is not pure. By comparison, in Haskell
data C = C Int deriving (Eq)
a = C 42
b = C 42
is indeed equivalent to
data C = C Int deriving (Eq)
a = C 42
b = a
since there are no "references" or "object identities" in Haskell. Note that the Haskell implementation likely will allocate two "objects" in the first snippet, and only one object in the second one, but since there is no way to tell them apart inside Haskell, the program output will be the same.
Answer
Is a function a value? (Then what does it mean to equate two functions: assert(f == g)? For two functions that are equivalent but defined separately, f != g; why don't they work like 1 == 1?)
Yes, functions are values in pure functional programming.
Above, when you mention "function that is equivalent but defined separately", you are assuming that we can compare the "references" or "object identities" of these two functions. In pure functional programming, we cannot.
Pure functional programming should compare functions by making f == g equivalent to f x == g x for all possible arguments x. This is feasible when there are only a few values for x: e.g. if f, g :: Bool -> Int, we only need to check x = True and x = False. For functions having infinite domains, this is much harder. For instance, if f, g :: String -> Int, we cannot check infinitely many strings.
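For example, a sketch of such a finite-domain check in Scala (sameOnBool is a hypothetical name; Scala's Boolean plays the role of Haskell's Bool):
def sameOnBool(f: Boolean => Int, g: Boolean => Int): Boolean =
  f(true) == g(true) && f(false) == g(false)

sameOnBool(b => if (b) 1 else 0, b => if (b) 1 else 0) // true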
Theoretical computer science (computability theory) has also proved that there is no algorithm to compare two functions String -> Int, not even an inefficient algorithm, not even if we have access to the source code of the two functions. For this mathematical reason, we must accept that functions are values that cannot be compared. In Haskell, we express this through the Eq typeclass, stating that almost all the standard types are comparable, functions being the exception.
Is an object with methods a value? (for example, IO { println("") })
Yes. Roughly speaking, "everything is a value", including IO actions.
Is an object with setter methods and mutable state a value?
Is an object with mutable state that works as a state machine a value?
There is no mutable state in pure functional programming.
At best, the setters can produce a "new" object with the modified fields.
And yes, the object would be a value.
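For example, a minimal sketch of such a functional "setter" in Scala, using the copy method that case classes provide:
case class Point(x: Int, y: Int)

val p1 = Point(1, 2)
val p2 = p1.copy(x = 10) // a new value with the modified field
// p1 is untouched: p1 == Point(1, 2), p2 == Point(10, 2)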
How do we test whether something is a value? Is immutability a sufficient condition?
In pure functional programming, we can only have immutable data.
In impure functional programming, I think we can call most immutable objects "values", when we do not compare object references. If the "immutable" object contains a reference to a mutable object, e.g.
case class D(var x: Int)
case class C(c: D)
val a = C(D(42))
then things are more tricky. I guess we could still call a "immutable", since we cannot reassign a.c, but we should be careful since a.c.x can be mutated.
Depending on the intent, I think that some would not call a immutable. I would not consider a to be a value.
To make things more muddy, in impure programming, there are objects which use mutation to present a "pure" interface in an efficient way. For instance, one can write a pure function that, before returning, stores its result in a cache. When called again on the same argument, it will return the previously computed result (this is usually called memoization). Here, mutation happens, but it is not observable from outside, where at most we can observe a faster implementation. In this case, we can simply pretend that the function is pure (even if it performs mutation) and consider it a "value".
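A minimal Scala sketch of such memoization (memoize is a hypothetical name; the cache is mutable, but the mutation is not observable from outside):
import scala.collection.mutable

def memoize[A, B](f: A => B): A => B = {
  val cache = mutable.Map.empty[A, B] // hidden, unobservable state
  a => cache.getOrElseUpdate(a, f(a)) // compute once, then reuse
}

val slowSquare = memoize { (n: Int) => Thread.sleep(1000); n * n }
slowSquare(4) // slow the first time
slowSquare(4) // fast: served from the cache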
The contrast with imperative languages is stark. In imperative languages, like Python, the output of a function is directed: it can be assigned to a variable, explicitly returned, printed, or written to a file.
When I compose a function in Haskell, I never consider output. I never use "return". Everything has "a" value. This is called "symbolic" programming. By "everything" is meant "symbols". Like human language, the nouns and verbs represent something. That something is their value. The "value" of "Pete" is Pete. The name "Pete" is not Pete, but is a representation of Pete, the person. The same is true of functional programming. The best analogy is math or logic. When you do pages of calculations, do you direct the output of each function? You even "assign" variables to be replaced by their "value" in functions or expressions.
Values are
Immutable/Timeless
Anonymous
Semantically Transparent
What is the value of 42? 42. What is the "value" of new Date()? Date object at 0x3fa89c3. What is the identity of 42? 42. What is the identity of new Date()? As we saw in the previous example, it's the thing that lives at that place. It may have many different "values" in different contexts, but it has only one identity. OTOH, 42 is sufficient unto itself. It's semantically meaningless to ask where 42 lives in the system. What is the semantic meaning of 42? The magnitude 42. What is the semantic meaning of new Foo()? Who knows.
I would add a fourth criterion (seen in some contexts in the wild but not others): values are language agnostic. (I'm not certain the first three are sufficient to guarantee this, nor that such a rule is entirely consistent with most people's intuition of what "value" means.)
Values are things that
functions can take as inputs and return as outputs, that is, can be computed, and
are members of a type, that is, elements of some set, and
can be bound to a variable, that is, can be named.
The first point is really the crucial test of whether something is a value. Perhaps the word "value", due to conditioning, might immediately make us think of just numbers, but the concept is very general: essentially, anything we can give to and get out of a function can be considered a value. Numbers, strings, booleans, instances of classes, functions themselves, predicates, and even types themselves can be inputs and outputs of functions, and thus are values.
The IO monad is a great example of how general this concept is. When we say the IO monad models side-effects as values, we mean a function can take a side-effect (say, println) as input and return one as output. IO(println(...)) separates the description of the println effect from the actual execution of that effect, and allows these effects to be treated as first-class values that can be computed with using the same language facilities as any other values, such as numbers.
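To make that concrete, a minimal toy sketch of such an IO value (this is not the real cats-effect or Monix implementation, just the idea):
final case class IO[A](run: () => A) {
  def map[B](f: A => B): IO[B] = IO(() => f(run()))
  def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(run()).run())
}

val hello: IO[Unit] = IO(() => println("hello")) // a value; nothing printed yet
val twice: IO[Unit] = hello.flatMap(_ => hello)  // composed like any other value
twice.run()                                      // only now does "hello" print, twice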
More explicitly, can this code produce any errors in any scenarios:
def foreach[U](f :Int=>U) = f.asInstanceOf[Int=>Unit](1)
I know it works, and I have a vague idea why: any function, as an instance of a generic type, must define an erased version of apply, and the JVM performs the type check only when the object is actually returned to code where it has a concrete type (often miles away). So, in theory, as long as I never look at the returned value, I should be safe. I don't have a deep enough understanding of Java bytecode, let alone scalac, to have any certainty about it.
Why would I want to do it? Look at the following example:
import scala.collection.mutable
val b = mutable.Buffer.empty[Int]
val ints = Seq(1, 2, 3, 4)
ints foreach { b += _ }
It's a typical Scala construct, as far as imperative style can be typical. The function passed to foreach in this example takes an Int as an argument, and as scalac knows it to be an Int, it will create a closure with a specialized apply(x: Int). Unfortunately, its return type in this case is a mutable.Buffer[Int], which is an AnyRef. As far as I was able to see, scalac will never invoke a specialized apply taking an AnyVal argument if the result is an AnyRef (and vice versa). This means that even if the caller applies the function to an Int, underneath the function will box the argument and invoke the erased variant. Here, of course, it doesn't matter, as the elements are boxed within the List anyway, but I'm talking about the principle.
For this reason I prefer to define this type of method as foreach(f :X=>Unit), rather than foreach[O](f: X=>O) as it is in TraversableOnce. If the input sequence in the example had such a signature, everything would compile just fine, and the compiler would ignore the actual type of the expression and generate a function with Unit return type, which, when applied to an unboxed Int, would directly invoke void apply(int x), without boxing.
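For illustration, here is the shape of the two signatures I mean (the method names are hypothetical; the boxing behaviour is as I understand scalac's specialization, per the above):
// Generic result type, as in TraversableOnce: O is erased,
// so inside this method f(1) goes through the boxed, erased apply.
def foreachGeneric[O](f: Int => O): Unit = f(1)

// Unit-returning signature: Int => Unit is a specialized Function1,
// so f(1) can call void apply(int) directly, without boxing.
def foreachUnit(f: Int => Unit): Unit = f(1)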
The problem arises with interoperability: sometimes I need to call a method expecting a function with a Unit return type, and all I have is a generic function returning Odin knows what. Of course, I could just write f(_) to box it in another function object instead of passing it directly, but that to a large extent makes the whole optimisation of small, tight loops moot.
I have been looking into FP languages (off and on) for some time and have played with Scala, Haskell, F#, and some others. I like what I see and understand some of the fundamental concepts of FP (with absolutely no background in Category Theory - so don't talk Math, please).
So, given a type M[A], we have map, which takes a function A => B and returns an M[B]. But we also have flatMap, which takes a function A => M[B] and returns an M[B]. We also have flatten, which takes an M[M[A]] and returns an M[A].
In addition, many of the sources I have read describe flatMap as map followed by flatten.
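To make that claim concrete, here is the equivalence I mean in Scala (xs and f are just example names):
val xs = List(1, 2, 3)
val f = (n: Int) => List(n, n * 10)
xs.map(f).flatten // List(1, 10, 2, 20, 3, 30)
xs.flatMap(f)     // the same result in one step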
So, given that flatMap seems to be equivalent to flatten compose map, what is its purpose? Please don't say it is to support 'for comprehensions' as this question really isn't Scala-specific. And I am less concerned with the syntactic sugar than I am in the concept behind it. The same question arises with Haskell's bind operator (>>=). I believe they both are related to some Category Theory concept but I don't speak that language.
I have watched Brian Beckman's great video Don't Fear the Monad more than once and I think I see that flatMap is the monadic composition operator but I have never really seen it used the way he describes this operator. Does it perform this function? If so, how do I map that concept to flatMap?
BTW, I had a long writeup on this question with lots of listings showing experiments I ran trying to get to the bottom of the meaning of flatMap and then ran into this question which answered some of my questions. Sometimes I hate Scala implicits. They can really muddy the waters. :)
flatMap, known as "bind" in some other languages, is, as you said yourself, for function composition.
Imagine for a moment that you have some functions like these:
def foo(x: Int): Option[Int] = Some(x + 2)
def bar(x: Int): Option[Int] = Some(x * 3)
The functions work great, calling foo(3) returns Some(5), and calling bar(3) returns Some(9), and we're all happy.
But now you've run into the situation that requires you to do the operation more than once.
foo(3).map(x => foo(x)) // or just foo(3).map(foo) for short
Job done, right?
Except not really. The output of the expression above is Some(Some(7)), not Some(7), and if you now want to chain another map on the end you can't because foo and bar take an Int, and not an Option[Int].
Enter flatMap
foo(3).flatMap(foo)
Will return Some(7), and
foo(3).flatMap(foo).flatMap(bar)
Returns Some(21).
This is great! Using flatMap lets you chain functions of the shape A => M[B] to oblivion (in the previous example A and B are Int, and M is Option).
More technically speaking, flatMap and bind have the signature M[A] => (A => M[B]) => M[B], meaning they take a "wrapped" value, such as Some(3), Right('foo), or List(1, 2, 3), and shove it through a function that would normally take an unwrapped value, such as the aforementioned foo and bar. It does this by first "unwrapping" the value and then passing it through the function.
I've seen the box analogy used for this: flatMap takes the value out of its box, passes it to the function, and the function's result ends up back in a box.
This unwrapping and re-wrapping behavior means that if I were to introduce a third function that doesn't return an Option[Int] and tried to flatMap it to the sequence, it wouldn't work because flatMap expects you to return a monad (in this case an Option)
def baz(x: Int): String = x + " is a number"
foo(3).flatMap(foo).flatMap(bar).flatMap(baz) // <<< ERROR
To get around this, if your function doesn't return a monad, you'd just have to use the regular map function
foo(3).flatMap(foo).flatMap(bar).map(baz)
Which would then return Some("21 is a number").
It's the same reason you provide more than one way to do anything: it's a common enough operation that you may want to wrap it.
You could ask the opposite question: why have map and flatten when you already have flatMap and a way to store a single element inside your collection? That is,
x map f
x filter p
can be replaced by
x flatMap ( xi => x.take(0) :+ f(xi) )
x flatMap ( xi => if (p(xi)) x.take(0) :+ xi else x.take(0) )
so why bother with map and filter?
In fact, there are various minimal sets of operations you need to reconstruct many of the others (flatMap is a good choice because of its flexibility).
Pragmatically, it's better to have the tool you need. Same reason why there are non-adjustable wrenches.
The simplest reason is to compose an output collection where each entry in the input may produce more than one output, or none at all.
For example, consider a program which outputs addresses for people to generate mailers. Most people have one address. Some have two or more. Some people, unfortunately, have none. Flatmap is a generalized algorithm to take a list of these people and return all of the addresses, regardless of how many come from each person.
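For instance, a minimal Scala sketch of that mailer example (the Person type and the data are hypothetical):
case class Person(name: String, addresses: List[String])

val people = List(
  Person("Ann", List("1 Main St")),
  Person("Bob", List("2 Oak Ave", "3 Elm Rd")),
  Person("Cal", Nil) // no address at all
)

people.flatMap(_.addresses)
// List("1 Main St", "2 Oak Ave", "3 Elm Rd"): one flat list,
// however many addresses each person contributed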
The zero output case is particularly useful for monads, which often (always?) return exactly zero or one results (think Maybe: it returns zero results if the computation fails, or one if it succeeds). In that case you want to perform an operation on "all of the results", which, as it happens, may be one or many.
The "flatMap", or "bind", method, provides an invaluable way to chain together methods that provide their output wrapped in a Monadic construct (like List, Option, or Future). For example, suppose you have two methods that produce a Future of a result (eg. they make long-running calls to databases or web service calls or the like, and should be used asynchronously):
def fn1(input1: A): Future[B] // (for some types A and B)
def fn2(input2: B): Future[C] // (for some types B and C)
How to combine these? With flatMap, we can do this as simply as:
def fn3(a: A): Future[C] = fn1(a).flatMap(b => fn2(b))
In this sense, we have "composed" a function fn3 out of fn1 and fn2 using flatMap, which has the same general structure (and so can be composed in turn with further similar functions).
The map method would give us a not-so-convenient - and not readily chainable - Future[Future[C]]. Certainly we can then use flatten to reduce this, but the flatMap method does it in one call, and can be chained as far as we wish.
This is so useful a way of working, in fact, that Scala provides the for-comprehension as essentially a short-cut for this (Haskell, too, provides a short-hand way of writing a chain of bind operations - I'm not a Haskell expert, though, and don't recall the details) - hence the talk you will have come across about for-comprehensions being "de-sugared" into a chain of flatMap calls (along with possible filter calls and a final map call for the yield).
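For example, a sketch of that desugaring, reusing fn1 and fn2 from above (A, B, C as before; fn3Sugared is a name I made up):
def fn3Sugared(a: A): Future[C] =
  for {
    b <- fn1(a)
    c <- fn2(b)
  } yield c
// the compiler rewrites this to: fn1(a).flatMap(b => fn2(b).map(c => c))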
Well, one could argue, you don't need .flatten either. Why not just do something like
import scala.annotation.tailrec

@tailrec
def flatten[T](in: Seq[Seq[T]], out: Seq[T] = Nil): Seq[T] = in match {
  case Seq() => out
  case head +: tail => flatten(tail, out ++ head)
}
Same can be said about map:
@tailrec
def map[A, B](in: Seq[A], out: Seq[B] = Nil)(f: A => B): Seq[B] = in match {
  case Seq() => out
  case head +: tail => map(tail, out :+ f(head))(f)
}
So, why are .flatten and .map provided by the library? Same reason .flatMap is: convenience.
There is also .collect, which is really just
list.filter(f.isDefinedAt _).map(f)
.reduce is actually nothing more than list.tail.foldLeft(list.head)(f),
.headOption is
list match {
  case Nil => None
  case head :: _ => Some(head)
}
Etc ...
In the book Functional Programming in Scala MEAP v10, the author mentions
Polymorphic functions are often so constrained by their type that they only have one implementation!
and gives the example
def partial1[A,B,C](a: A, f: (A,B) => C): B => C = (b: B) => f(a, b)
What does he mean by this statement? Are polymorphic functions restrictive?
Here's a simpler example:
def mysteryMethod[A, B](somePair: (A, B)): B = ???
What does this method do? It turns out that there is only one thing this method can do! You don't need the name of the method, you don't need the implementation of the method, you don't need any documentation. The type tells you everything it could possibly do, and it turns out that "everything" in this case is exactly one thing.
So, what does it do? It takes a pair (A, B) and returns some value of type B. What value does it return? Can it construct a value of type B? No, it can't, because it doesn't know what B is! Can it return a random value of type B? No, because randomness is a side-effect and thus would have to appear in the type signature. Can it go out in the universe and fetch some B? No, because that would be a side-effect and would have to appear in the type signature!
In fact, the only thing it can do is return the value of type B that was passed into it, the second element of the pair. So, this mysteryMethod is really the second method, and its only sensible implementation is:
def second[A, B](somePair: (A, B)): B = somePair._2
Note that in reality, since Scala is neither pure nor total, there are in fact a couple of other things the method could do: throw an exception (i.e. return abnormally), go into an infinite loop (i.e. not return at all), use reflection to figure out the actual type of B and reflectively invoke the constructor to fabricate a new value, etc.
However, assuming purity (the return value may only depend on the arguments), totality (the method must return a value normally) and parametricity (it really doesn't know anything about A and B), then there is in fact an awful lot you can tell about a method by only looking at its type.
Here's another example:
def mysteryMethod(someBoolean: Boolean): Boolean = ???
What could this do? It could always return false and ignore its argument. But then it would be overly constrained: if it always ignores its argument, then it doesn't care that it is a Boolean and its type would rather be
def alwaysFalse[A](something: A): Boolean = false // same for true, obviously
It could always just return its argument, but again, then it wouldn't actually care about booleans, and its type would rather be
def identity[A](something: A): A = something
So, really, the only thing it can do is return a different boolean than the one that was passed in, and since there are only two booleans, we know that our mysteryMethod is, in fact, boolean negation, i.e. not:
def not(someBoolean: Boolean): Boolean = if (someBoolean) false else true
So, here we have an example where the types don't give us the implementation, but at least they give us a (small) set of four possible implementations, only one of which makes sense.
(By the way: it turns out that there is only one possible implementation of a method which takes an A and returns an A, and it is the identity method shown above.)
So, to recap:
purity means that you can only use the building blocks that were handed to you (the arguments)
a strong, strict, static type system means that you can only use those building blocks in such a way that their types line up
totality means that you can't do stupid things (like infinite loops or throwing exceptions)
parametricity means that you cannot make any assumptions at all about your type variables
Think about your arguments as parts of a machine and your types as connectors on those machine parts. There will only be a limited number of ways that you can connect those machine parts together in a way that you only plug together compatible connectors and you don't have any leftover parts. Often enough, there will be only one way, or if there are multiple ways, then often one will be obviously the right one.
What this means is that, once you have designed the types of your objects and methods, you won't even have to think about how to implement those methods, because the types will already dictate the only possible way to implement them! Considering how many questions on StackOverflow are basically "how do I implement this?", can you imagine how freeing it must be not having to think about that at all, because the types already dictate the one (or one of a few) possible implementation?
Now, look at the signature of the method in your question and try playing around with different ways to combine a and f in such a way that the types line up and you use both a and f and you will indeed see that there is only one way to do that. (As Chris and Paul have shown.)
def partial1[A,B,C](a: A, f: (A,B) => C): B => C = (b: B) => f(a, b)
Here, partial1 takes as parameters a value of type A, and a function that takes a parameter of type A and a parameter of type B, returning a value of type C.
partial1 must return a function taking a value of type B and returning a C. Given that A, B, and C are arbitrary, we cannot apply any functions to their values. So the only possibility is to apply the function f to the value a passed to partial1 and to the value of type B that is a parameter of the function we return.
So you end up with the single possibility that's in the definition: f(a, b).
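To see it in use, a quick sketch (add and add3 are hypothetical names):
def add(a: Int, b: Int): Int = a + b

val add3: Int => Int = partial1(3, add) // fixes the first argument to 3
add3(4) // 7, i.e. add(3, 4)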
To take a simpler example, consider the type Option[A] => Boolean. There are only a couple of ways to implement this:
def foo1[A](x: Option[A]): Boolean = x match {
  case Some(_) => true
  case None => false
}
def foo2[A](x: Option[A]): Boolean = !foo1(x)
def foo3[A](x: Option[A]): Boolean = true
def foo4[A](x: Option[A]): Boolean = false
The first two choices are pretty much the same, and the last two are trivial, so essentially there's only one useful thing this function could do, which is tell you whether the Option is Some or None.
The space of possible implementations is "restricted" by the abstractness of the function type. Since A is unconstrained, the option's value could be anything, so the function can't depend on that value in any way, because you know nothing about what it is. The only "understanding" the function may have of its parameter is the structure of Option[_].
Now, back to your example. You have no idea what C is, so there's no way you can construct one yourself. Therefore the function you create is going to have to call f to get a C. And in order to call f, you need to provide arguments of types A and B. Again, since there's no way to create an A or a B yourself, the only thing you can do is use the arguments that are given to you. So there's no other possible function you could write.
From my understanding, a partially applied function is a function that we can invoke without passing some (or all) of the required arguments.
def add(x:Int, y:Int) = x + y
val paf = add(_ :Int, 3)
val paf1 = add(_ :Int, _ :Int)
In the above example, paf1 refers to a partially applied function with all the arguments missing, and I can invoke it using paf1(10, 20); the original function can be invoked using add(10, 20).
My question is, what is the extra benefit of creating a partially applied function with all the arguments missing, since the invocation syntax is pretty much the same? Is it just to convert methods into first class functions?
Scala's def keyword is how you define methods, and methods are not functions (in Scala). So your add is not a first-class function entity the way your paf1 is, even if they're semantically equivalent in what they do with their arguments to produce a result.
Scala will automatically use partial application to turn a method into an equivalent function, which you can see by extending your example a bit:
def add(x: Int, y: Int) = x + y
...
val pa2: (Int, Int) => Int = add
pa2: (Int, Int) => Int = <function2>
This may seem of little benefit in this example, but in many cases there are non-explicit constraints indicating that a function is required (or, more accurately, constraints specified explicitly elsewhere) that allow you to simply give a (type-compatible) method name in a place where a function is needed.
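For example, a small sketch of that (twice is a hypothetical method name):
def twice(n: Int): Int = n * 2

// map expects a function, so the method name alone is enough;
// the compiler eta-expands the method into a Function1:
List(1, 2, 3).map(twice) // List(2, 4, 6)

// the same coercion, made explicit by the expected type:
val twiceFn: Int => Int = twice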
There is a difference between methods and functions.
If you look at the declaration of List.map for example, it really expects a function. But the Scala compiler is smart enough to accept both methods and functions.
A quote from here
this trick ... for coercing a method into something where a function is expected, is so easy that even the compiler can detect and do it. In fact, this automatic coercion got its own name – it's called Eta expansion.
On the other hand, have a look at Java 8; as far as I can tell, it's not that easy there.
Update: the question was, Why would I ever want an eta-expanded method? One of the great rhetorical strategies in the Scala bible is that they lead you to an example over many pages. I employ the Exodus metaphor below because I just saw "The Ten Commandments" with Charlton Heston. I'm not pretending that this answer is more explanatory than Randall's.
You might need someone with lesser rep to note that the big build-up in the bible's book of Exodus is to:
http://www.artima.com/pins1ed/first-steps-in-scala.html#step6
args foreach println.
The previous step among the "first steps" is, indeed,
args foreach (arg => println(arg))
but I'm guessing no one does it that way if the type inference gods are kind.
From the change log in the spec: "a partially unapplied method is now designated m _ instead of the previous notation &m." That is, at a certain point, the notion of a "function pointer" became a partial function with no args supplied. Which is what it is. Update: "metaphorically."
I'm sure others can synthesize a bunch of use-cases for this, but really it's just a consequence of the fact that functions are values. It would be much weirder if you couldn't pass a function around as a normal value.
Scala's handling of this stuff is a little clunky. Partial application is usually combined with currying. I consider it a quirk that you can basically eta-expand any expression using _. What you're effectively doing (modulo Scala's syntactic quirks around currying) by writing add(_ : Int, _ : Int) is writing (x : Int) => (y : Int) => add(x, y). I'm sure you can think of instances where the latter definition might be useful in a program.
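A short sketch of the two spellings side by side (reusing the add from the question):
def add(x: Int, y: Int): Int = x + y

val uncurried: (Int, Int) => Int = add(_: Int, _: Int) // one argument list
val curried: Int => Int => Int = (x: Int) => (y: Int) => add(x, y)

uncurried(1, 2) // 3
curried(1)(2)   // 3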