What is "value" in pure functional programming? - scala

What constitutes a value in pure functional programming?
I am asking myself these questions after seeing a sentence:
Task(or IO) has a constructor that captures side-effects as values.
Is a function a value?
If so, what does it mean to equate two functions, e.g. assert(f == g)? Two functions that are equivalent but defined separately compare as f != g; why don't they behave like 1 == 1?
Is an object with methods a value? (for example IO { println("") })
Is an object with setter methods and mutable state a value?
Is an object with mutable state which works as a state machine a value?
How do we test whether something is a value? Is immutability a sufficient condition?
UPDATE:
I'm using Scala.

I'll try to explain what a value is by contrasting it with things that are not values.
Roughly speaking, values are structures produced by the process of evaluation which correspond to terms that cannot be simplified any further.
Terms
First, what are terms? Terms are syntactic structures that can be evaluated. Admittedly, this is a bit circular, so let's look at a few examples:
Constant literals are terms:
42
Functions applied to other terms are terms:
atan2(123, 456 + 789)
Function literals are terms
(x: Int) => x * x
Constructor invocations are terms:
Option(42)
Contrast this to:
Class declarations / definitions are not terms:
case class Foo(bar: Int)
that is, you cannot write
val x = (case class Foo(bar: Int))
this would be illegal.
Likewise, trait and type definitions are not terms:
type Bar = Int
sealed trait Baz
Unlike function literals, method definitions are not terms:
def foo(x: Int) = x * x
for example:
val x = (a: Int) => a * 2 // function literal, ok
val y = (def foo(a: Int): Int = a * 2) // no, not a term
Package declarations and import statements are not terms:
import foo.bar.baz._ // ok
List(package foo, import bar) // no
Normal forms, values
Now that it is hopefully somewhat clearer what a term is, what was meant by "cannot be simplified any further"? In idealized functional programming languages, you can define what a normal form, or rather a weak head normal form, is. Essentially, a term is in (weak head) normal form if no reduction rules can be applied to the term to make it any simpler. Again, a few examples:
This is a term, but it's not in normal form, because it can be reduced to 42:
40 + 2
This is not in weak head normal form:
((x: Int) => x * 2)(3)
because we can further evaluate it to 6.
This lambda is in weak head normal form (it's stuck, because the computation cannot proceed until an x is supplied):
(x: Int) => x * 42
This is not in normal form, because it can be simplified further:
42 :: List(10 + 20, 20 + 30)
This is in normal form, no further simplifications possible:
List(42, 30, 50)
Thus,
42,
(x: Int) => x * 42,
List(42, 30, 50)
are values, whereas
40 + 2,
((x: Int) => x * 2)(3),
42 :: List(10 + 20, 20 + 30)
are not values, but merely non-normalized terms that can be further simplified.
Examples and non-examples
I'll just go through your list of sub-questions one-by-one:
Is a function a value
Yes. Things like (x1: T1, ..., xn: Tn) => body are considered stuck terms in WHNF, and functional languages can actually represent them, so they are values.
If so, what does it mean when equating two functions: assert(f == g) for two functions that are equivalent but defined separately => f != g, why don't they work as 1 == 1?
Function extensionality is somewhat unrelated to the question whether something is a value or not. In the above "definition by example", I talked only about the shape of the terms, not about the existence / non-existence of some computable relations defined on those terms. The sad fact is that you can't even really determine whether a lambda-expression actually represents a function (i.e. whether it terminates for all inputs), and it is also known that there cannot be an algorithm that could determine whether two functions produce the same output for all inputs (i.e. are extensionally equal).
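To make this concrete, here is a small Scala sketch (the names f and g are mine, for illustration): two lambdas that agree on every input are still distinct function objects, and == falls back to reference equality:
val f: Int => Int = x => x * 2
val g: Int => Int = x => x + x
assert(f(21) == g(21)) // extensionally equal: same output for this (and every) input
assert(f != g)         // but == on function objects is reference equality
assert(f == f)         // a function value is only "equal" to itself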
Is an object with methods a value? (for example IO { println("") })
Not quite clear what you're asking here. Objects don't have methods. Classes have methods. If you mean method invocations, then, no, they are terms that can be further simplified (by actually running the method), so they are not values.
Is an object with setter methods and mutable state a value?
Is an object with mutable state which works as a state machine a value?
There is no such thing in pure functional programming.

What constitutes a value in pure functional programming?
Background
In pure functional programming there is no mutation. Hence, code such as
case class C(x: Int)
val a = C(42)
val b = C(42)
would become equivalent to
case class C(x: Int)
val a = C(42)
val b = a
since, in pure functional programming, if a.x == b.x, then we would have a == b. That is, a == b would be implemented by comparing the values inside.
However, Scala is not pure, since it allows mutation, like Java. In that case, we do NOT have the equivalence between the two snippets above when we declare case class C(var x: Int). Indeed, performing a.x += 1 afterwards does not affect b.x in the first snippet, but does in the second one, where a and b point to the same object. In such a case, it is useful to have a comparison a == b which compares the object references, rather than their inner integer values.
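A minimal sketch of that difference (with a mutable field):
case class C(var x: Int)
val a  = C(42)
val b1 = C(42) // first snippet: a separate object
val b2 = a     // second snippet: an alias for a
a.x += 1
// b1.x is still 42, but b2.x is now 43: with mutation,
// the two snippets are observably different programs.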
When using case class C(x: Int), the Scala comparison a == b behaves closer to pure functional programming, comparing the integer values. With regular (non-case) classes, Scala instead compares object references, breaking the equivalence between the two snippets. But, again, Scala is not pure. By comparison, in Haskell
data C = C Int deriving (Eq)
a = C 42
b = C 42
is indeed equivalent to
data C = C Int deriving (Eq)
a = C 42
b = a
since there are no "references" or "object identities" in Haskell. Note that the Haskell implementation will likely allocate two "objects" in the first snippet, and only one object in the second one, but since there is no way to tell them apart inside Haskell, the program output will be the same.
Answer
Is a function a value? (If so, what does it mean to equate two functions: assert(f == g)? For two functions that are equivalent but defined separately, f != g; why don't they work like 1 == 1?)
Yes, functions are values in pure functional programming.
Above, when you mention a "function that is equivalent but defined separately", you are assuming that we can compare the "references" or "object identities" of these two functions. In pure functional programming we cannot.
Pure functional programming should compare functions by making f == g equivalent to f x == g x for all possible arguments x. This is feasible when there are only a few values for x, e.g. if f,g :: Bool -> Int we only need to check x=True and x=False. For functions with infinite domains, this is much harder. For instance, if f,g :: String -> Int we cannot check infinitely many strings.
Theoretical computer science (computability theory) has also proved that there is no algorithm to compare two functions String -> Int, not even an inefficient algorithm, not even if we have access to the source code of the two functions. For this mathematical reason, we must accept that functions are values that cannot be compared. In Haskell, we express this through the Eq typeclass, stating that almost all the standard types are comparable, functions being the exception.
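For a finite domain, though, the comparison described above is easy to write down. A Scala sketch (eqOnBool and the two lambdas are my names, for illustration):
def eqOnBool(f: Boolean => Int, g: Boolean => Int): Boolean =
  f(true) == g(true) && f(false) == g(false)

val f = (b: Boolean) => if (b) 1 else 0
val g = (b: Boolean) => if (b) 2 - 1 else 0
assert(eqOnBool(f, g)) // extensionally equal, even though f != g as references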
Is an object with methods a value ? (for example, IO{println("")})
Yes. Roughly speaking, "everything is a value", including IO actions.
Is an object with setter methods and mutable state a value?
Is an object with mutable state which works as a state machine a value?
There is no mutable state in pure functional programming.
At best, the setters can produce a "new" object with the modified fields.
And yes, the object would be a value.
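In Scala, this is exactly what the compiler-generated copy method of a case class gives you; a small sketch:
case class Point(x: Int, y: Int)
val p  = Point(1, 2)
val p2 = p.copy(x = 10) // a "setter" that returns a new value
// p is unchanged: Point(1,2); p2 is Point(10,2)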
How do we test whether something is a value? Is immutability a sufficient condition?
In pure functional programming, we can only have immutable data.
In impure functional programming, I think we can call most immutable objects "values", when we do not compare object references. If the "immutable" object contains a reference to a mutable object, e.g.
case class D(var x: Int)
case class C(c: D)
val a = C(D(42))
then things are more tricky. I guess we could still call a "immutable", since we cannot alter a.c, but we should be careful since a.c.x can be mutated.
Depending on the intent, I think that some would not call a immutable. I would not consider a to be a value.
To make things more muddy, in impure programming there are objects which use mutation to present a "pure" interface in an efficient way. For instance, one can write a pure function that, before returning, stores its result in a cache. When called again on the same argument, it will return the previously computed result (this is usually called memoization). Here, mutation happens, but it is not observable from outside, where at most we can observe a faster implementation. In this case, we can simply pretend that the function is pure (even if it performs mutation) and consider it a "value".
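A minimal Scala sketch of such a memoizing wrapper (the helper memoize is mine, for illustration): mutation inside, a pure interface outside.
import scala.collection.mutable

def memoize[A, B](f: A => B): A => B = {
  val cache = mutable.Map.empty[A, B]
  a => cache.getOrElseUpdate(a, f(a)) // the mutation is not observable from outside
}

val square: Int => Int = x => x * x
val fastSquare = memoize(square) // observationally pure: still a "value"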

The contrast with imperative languages is stark. In imperative languages, like Python, the output of a function is directed: it can be assigned to a variable, explicitly returned, printed, or written to a file.
When I compose a function in Haskell, I never consider output. I never use "return". Everything has "a" value. This is called "symbolic" programming. By "everything" is meant "symbols". Like human language, the nouns and verbs represent something; that something is their value. The "value" of "Pete" is Pete. The name "Pete" is not Pete but is a representation of Pete, the person. The same is true of functional programming. The best analogy is math or logic. When you do pages of calculations, do you direct the output of each function? You even "assign" variables to be replaced by their "value" in functions or expressions.

Values are
Immutable/Timeless
Anonymous
Semantically Transparent
What is the value of 42? 42. What is the "value" of new Date()? Date object at 0x3fa89c3. What is the identity of 42? 42. What is the identity of new Date()? As we saw in the previous example, it's the thing that lives at the place. It may have many different "values" in different contexts but it has only one identity. OTOH, 42 is sufficient unto itself. It's semantically meaningless to ask where 42 lives in the system. What is the semantic meaning of 42? Magnitude of 42. What is the semantic meaning of new Foo()? Who knows.
I would add a fourth criterion (I see this in some contexts in the wild but not others): values are language agnostic. (I'm not certain the first 3 are sufficient to guarantee this, nor that such a rule is entirely consistent with most people's intuition of what "value" means.)

Values are things that
functions can take as inputs and return as outputs, that is, can be computed, and
are members of a type, that is, elements of some set, and
can be bound to a variable, that is, can be named.
The first point is really the crucial test of whether something is a value. Perhaps the word value, due to conditioning, might immediately make us think of just numbers, but the concept is very general. Essentially, anything we can give to and get out of a function can be considered a value. Numbers, strings, booleans, instances of classes, functions themselves, predicates, and even types themselves can be inputs and outputs of functions, and thus are values.
The IO monad is a great example of how general this concept is. When we say the IO monad models side-effects as values, we mean a function can take a side-effect (say, println) as input and return it as output. IO(println(...)) separates the description of the println effect from the actual execution of the action, and allows such effects to be treated as first-class values that can be computed with using the same language facilities as any other value, such as numbers.
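A toy sketch of this idea in Scala (not the real cats-effect or Monix implementation): an IO value merely describes an effect, and nothing runs until it is explicitly interpreted.
final case class IO[A](unsafeRun: () => A) {
  def map[B](f: A => B): IO[B]         = IO(() => f(unsafeRun()))
  def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(unsafeRun()).unsafeRun())
}

val greet: IO[Unit] = IO(() => println("hello")) // a value; prints nothing yet
val twice: IO[Unit] = greet.flatMap(_ => greet)  // composed like any other value
twice.unsafeRun()                                // only now do the effects happen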

Related

When should one use applicatives over monads?

I’ve been using Scala at work and to understand Functional Programming more deeply I picked Graham Hutton’s Programming in Haskell (love it :)
In the chapter on Monads I got my first look at the concept of Applicative Functors (AFs).
In my (limited) professional-Scala capacity I’ve never had to use AFs and have always written code that uses Monads. I’m trying to distill the understanding of “when to use AFs” and hence the question. Is this insight correct:
If all your computations are independent and parallelizable (i.e., the result of one doesn't determine the output of another), your needs would be better served by an AF if the output needs to be piped to a pure function without effects. If, however, you have even a single dependency, AFs won't help and you'll be forced to use Monads. If the output needs to be piped to a function with effects (e.g., returning Maybe), you'll need Monads.
For example, if you have “monadic” code like so:
val result = for {
  x <- callServiceX(...)
  y <- callServiceY(...) // not dependent on x
} yield f(x, y)
It’s better to do something like (pseudo-AF syntax for scala where |#| is like a separator between parallel/asynch calls).
val result = (callServiceX(...) |#| callServiceY(...)).f(_,_)
If f == pure and callService* are independent AFs will serve you better
If f has effects i.e., f(x,y): Option[Response] you’ll need Monads
If callServiceX(...), y <- callServiceY(...), callServiceZ(y) i.e., there is even a single dependency in the chain, use Monads.
Is my understanding correct? I know there’s a lot more to AFs/Monads and I believe I understand the advantages of one over the other (for the most part). What I want to know is the decision making process of deciding which one to use in a particular context.
There is not really a decision to be made here: always use the Applicative interface, unless it is too weak.1
It's the essential tension of abstraction strength: more computations can be expressed with Monad; computations expressed with Applicative can be used in more ways.
You seem to be mostly correct about the conditions where you need to use Monad. I'm not sure about this one:
If f has effects i.e. f(x,y) : Option[Response] you'll need Monads.
Not necessarily. What is the functor in question here? There is nothing stopping you from creating an F[Option[X]] if F is the applicative. But just as before, you won't be able to make further decisions in F depending on whether the Option succeeded or not -- the whole "call tree" of F actions must be knowable without computing any values.
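For instance, with Future playing the role of F (a sketch; the function f and the values below are hypothetical), nothing stops the combined result from being an F[Option[X]]:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def f(x: Int, y: Int): Option[String] = if (x + y > 0) Some("ok") else None

val fx = Future(1) // independent
val fy = Future(2) // independent
val result: Future[Option[String]] = fx.zipWith(fy)(f) // an F[Option[X]]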
1 Readability concerns aside, that is. Monadic code will probably be more approachable to people from traditional backgrounds because of its imperative look.
I think you'll need to be a little cautious about terms like "independent" or "parallelizable" or "dependency". For example, in the IO monad, consider the computation:
foo :: IO (String, String)
foo = do
  line1 <- getLine
  line2 <- getLine
  return (line1, line2)
The first and second lines are not independent or parallelizable in the usual sense. The second getLine's result is affected by the action of the first getLine through their shared external state (i.e., the first getLine reads a line, implying the second getLine will not read that same line but will rather read the next line). Nonetheless, this action is applicative:
foo = (,) <$> getLine <*> getLine
As a more realistic example, a monadic parser for the expression 3 + 4 might look like:
expr :: Parser Expr
expr = do
  x <- factor
  op <- operator
  y <- factor
  return $ x `op` y
The three actions here are interdependent. The success of the first factor parser determines whether or not the others will be run, and its behavior (e.g., how much of the input stream it absorbs) clearly affects the results of the other parsers. It would not be reasonable to consider these actions as operating "in parallel" or being "independent". Still, it's an applicative action:
expr = factor <**> operator <*> factor
Or, consider this State Int action:
bar :: Int -> Int -> State Int Int
bar x y = do
  put (x + y)
  z <- gets (2*)
  return z
Clearly, the result of the gets (2*) action depends on the computation performed in the put (x + y) action. But, again, this is an applicative action:
bar x y = put (x + y) *> gets (2*)
I'm not sure that there's a really straightforward way of thinking about this intuitively. Roughly, if you think of a monadic action/computation m a as having "monadic structure" m as well as a "value structure" a, then applicatives keep the monadic and value structures separate. For example, the applicative computation:
λ> [(1+),(10+)] <*> [3,4,5]
[4,5,6,13,14,15]
has a monadic (list) structure whereby we always have:
[f,g] <*> [a,b,c] = [f a, f b, f c, g a, g b, g c]
regardless of the actual values involved. Therefore, the resulting list length is the product of the lengths of both "input" lists, the first element of the result involves the first elements of the "input" lists, etc. It also has a value structure whereby the value 4 in the result clearly depends on the value (1+) and the value 3 in the inputs.
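A rough Scala analog of that list computation (a sketch; List's flatMap/map agree with the list applicative's structure):
val fs: List[Int => Int] = List(_ + 1, _ + 10)
val xs = List(3, 4, 5)
val applied = fs.flatMap(h => xs.map(h)) // List(4, 5, 6, 13, 14, 15)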
A monadic computation, on the other hand, permits a dependency of the monadic structure on the value structure, so for example in:
quux :: [Int]
quux = do
  n <- [1,2,3]
  drop n [10..15]
we can't write down the structural list computation independently of the values. The list structure (e.g., the length of the final list) is dependent on the value-level data (the actual values in the list [1,2,3]). This is the kind of dependency that requires a monad instead of an applicative.
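A Scala translation of quux (a sketch) makes the same point: the length of the result depends on the values in the first list, so flatMap (i.e., the monad) is genuinely needed:
val quux: List[Int] = for {
  n <- List(1, 2, 3)
  x <- (10 to 15).toList.drop(n)
} yield x
// 5 + 4 + 3 = 12 elements: the structure depends on the values 1, 2, 3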

Pure function in Scala in the context of class methods, and closures

What is the precise definition of a pure function in Scala? Pure function has a definition on Wikipedia: https://en.wikipedia.org/wiki/Pure_function . I think this definition is for a pure functional programming language.
However, I think it becomes complicated in the context of class method and closure.
class Ave(val a: Int, val b: Int) {
  def ave = (a + b) / 2
}
Is ave a pure function? I think it is, because it has no side effects and depends only on the immutable state of the class. But this actually violates the pure function definition on the wiki, which says a pure function generally should not access non-local variables.
Similar question on closure:
def fcn(a: Int, b: Int): Unit = {
  def ave = (a + b) / 2
}
To me, both aves are pure functions, and equivalent to a "val" (Uniform Access Principle).
But how can this be rigorously justified? Moreover, if the a and b fields are mutable, ave is no longer pure:
class Ave2(var a: Int, var b: Int) {
  def ave = (a + b) / 2 // not pure functional
}
Another definition of pure function comes from the book "Functional Programming in Scala":
An expression e is referentially transparent if, for all programs p, all occurrences of e in p can be replaced by the result of evaluating e without affecting the meaning of p. A function f is pure if the expression f(x) is referentially transparent for all referentially transparent x.
Then the question is: for a class, is mutable state in a class referentially transparent (var a, var b in my example)? (If it is, then the ave method in Ave2 becomes a pure function, which is a contradiction.)
What is the precise definition of a pure function in Scala?
Short answer:
Closures do not prevent purity as long as the closed-over variables are immutable.
Long answer:
In a truly functional programming language, non-local variables will never change, so having a closure doesn't affect purity. Haskell functions are pure, and yet Haskell uses closures without trouble. As long as your function obeys the rule of not having side-effects and of returning the same result for the same set of parameters every time it is invoked (in other words, not having mutable state), you get referential transparency and thus purity. If you're nitpicking on the theoretical view of things, then I completely understand worrying about closures, since pure functions should depend on their arguments and their arguments only (though not necessarily all of them). But from a practical viewpoint, closing over immutable variables is not considered to violate purity.
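A small sketch of a pure closure over an immutable variable:
val k = 10                        // immutable, non-local from addK's point of view
val addK: Int => Int = x => x + k // pure: same input always yields the same output
assert(addK(5) == 15)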
Note that local mutable state doesn't necessarily endanger purity either. This is slippery terrain, but Martin Odersky himself said at one point (I can dig out the exact source if I really must; it was either a lecture in one of the Coursera courses or the book Programming in Scala) that vars are OK as long as you keep them invisible to the outside world. So this silly function:
def addOne(i: Int) = {
  var s = i
  s = s + 1
  s
}
can be considered pure even though it uses mutable state (the variable s), because that state is not exposed to the "outside world" and doesn't endanger the referential transparency of the method addOne.

The purpose of type classes in Haskell vs the purpose of traits in Scala

I am trying to understand how to think about type classes in Haskell versus traits in Scala.
My understanding is that type classes are primarily important at compile time in Haskell and not at run time; traits in Scala, on the other hand, are important both at compile time and at run time. I want to illustrate this idea with a simple example, and I want to know if this viewpoint of mine is correct or not.
First, let us consider type classes in Haskell:
Let's take a simple example. The type class Eq.
For example, Int and Char are both instances of Eq. So it is possible to create a polymorphic List that is also an instance of Eq and can contain either Ints or Chars, but not both in the same List.
My question is : is this the only reason why type classes exist in Haskell?
The same question in other words:
Type classes make it possible to create polymorphic types (in this example a polymorphic List) that support operations defined in a given type class (in this example the operation == defined in the type class Eq), but that is their only reason for existence, according to my understanding. Is this understanding of mine correct?
Is there any other reason why type classes exist in ( standard ) Haskell?
Is there any other use case in which type classes are useful in standard Haskell? I cannot seem to find any.
Since Haskell's Lists are homogeneous, it is not possible to put Char and Int into the same list. So the usefulness of type classes, according to my understanding, is exhausted at compile time. Is this understanding of mine correct?
Now, let's consider the analogous List example in Scala:
Let's define a trait Eq with an equals method on it.
Now let's make Char and Int implement the trait Eq.
Now it is possible to create a List[Eq] in Scala that accepts both Chars and Ints into the same List (note that this - putting different types of elements into the same List - is not possible in Haskell, at least not in standard Haskell 98 without extensions)!
In the case of the Haskell's List, the existence of type classes is important/useful only for type checking at compile time, according to my understanding.
In contrast, the existence of traits in Scala is important both at compile time for type checking and at run time for polymorphic dispatch on the actual runtime type of the object in the List when comparing two Lists for equality.
So, based on this simple example, I came to the conclusion that in Haskell type classes are primarily important/used at compilation time, in contrast, Scala's traits are important/used both at compile time and run time.
Is this conclusion of mine correct?
If not, why not ?
EDIT:
Scala code in response to n.m.'s comments:
case class MyInt(i: Int) {
  override def equals(b: Any) = i == b.asInstanceOf[MyInt].i
}
case class MyChar(c: Char) {
  override def equals(a: Any) = c == a.asInstanceOf[MyChar].c
}
object Test {
  def main(args: Array[String]): Unit = {
    val l1 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('b'))
    val l2 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('b'))
    val l3 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('c'))
    println(l1 == l2)
    println(l1 == l3)
  }
}
This prints:
true
false
I will comment on the Haskell side.
Type classes bring restricted polymorphism in Haskell, wherein a type variable a can still be quantified universally, but ranges over only a subset of all the types -- namely, the types for which an instance of the type class is available.
Why is restricted polymorphism useful? A nice example would be the equality operator
(==) :: ?????
What should its type be? Intuitively, it takes two values of the same type and returns a boolean, so:
(==) :: a -> a -> Bool -- (1)
But the typing above is not entirely honest, since it allows one to apply == to any type a, including function types!
(\(x :: Integer) -> x + x) == (\(x :: Integer) -> 2 * x)
The above would pass type checking if (1) were the typing for (==), since both arguments are of the same type a = (Integer -> Integer). However, we cannot effectively compare two functions: well-known computability results tell us that there is no algorithm to do that in general.
So, what could we do to implement (==)?
Option 1: at run time, if a function (or any other value involving functions -- such as a list of functions) is found to be passed to (==), raise an exception. This is what e.g. ML does. Typed programs can now "go wrong", despite checking types at compile time.
Option 2: introduce a new kind of polymorphism, restricting a to the function-free types. For instance, we could have (==) :: forall-non-fun a. a -> a -> Bool so that comparing functions yields a type error. Haskell exploits type classes to obtain exactly that.
So, Haskell type classes allow one to type (==) "honestly", ensuring no error at run time, and without being overly restrictive. Of course, the power of type classes goes far beyond that but, at least in my own view, their primary purpose is to allow restricted polymorphism, in a very general and flexible way. Indeed, with type classes the programmer can define their own restrictions on the universal type quantifications.
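For comparison, here is a rough Scala encoding of the same restriction using implicits (a sketch; Eq, eqv, and equal are my names, not a standard library API):
trait Eq[A] { def eqv(x: A, y: A): Boolean }
implicit val intEq: Eq[Int] = (x, y) => x == y

def equal[A: Eq](x: A, y: A): Boolean = implicitly[Eq[A]].eqv(x, y)

equal(1, 1) // compiles: an Eq[Int] instance is in scope
// equal((x: Int) => x, (x: Int) => x) // rejected at compile time: no Eq instance for functions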

Are polymorphic functions "restrictive" in Scala?

In the book Functional Programming in Scala MEAP v10, the author mentions
Polymorphic functions are often so constrained by their type that they only have one implementation!
and gives the example
def partial1[A,B,C](a: A, f: (A,B) => C): B => C = (b: B) => f(a, b)
What does he mean by this statement? Are polymorphic functions restrictive?
Here's a simpler example:
def mysteryMethod[A, B](somePair: (A, B)): B = ???
What does this method do? It turns out that there is only one thing this method can do! You don't need the name of the method, you don't need the implementation of the method, you don't need any documentation. The type tells you everything it could possibly do, and it turns out that "everything" in this case is exactly one thing.
So, what does it do? It takes a pair (A, B) and returns some value of type B. What value does it return? Can it construct a value of type B? No, it can't, because it doesn't know what B is! Can it return a random value of type B? No, because randomness is a side-effect and thus would have to appear in the type signature. Can it go out in the universe and fetch some B? No, because that would be a side-effect and would have to appear in the type signature!
In fact, the only thing it can do is return the value of type B that was passed into it, the second element of the pair. So, this mysteryMethod is really the second method, and its only sensible implementation is:
def second[A, B](somePair: (A, B)): B = somePair._2
Note that in reality, since Scala is neither pure nor total, there are in fact a couple of other things the method could do: throw an exception (i.e. return abnormally), go into an infinite loop (i.e. not return at all), use reflection to figure out the actual type of B and reflectively invoke the constructor to fabricate a new value, etc.
However, assuming purity (the return value may only depend on the arguments), totality (the method must return a value normally) and parametricity (it really doesn't know anything about A and B), then there is in fact an awful lot you can tell about a method by only looking at its type.
Here's another example:
def mysteryMethod(someBoolean: Boolean): Boolean = ???
What could this do? It could always return false and ignore its argument. But then it would be overly constrained: if it always ignores its argument, then it doesn't care that it is a Boolean and its type would rather be
def alwaysFalse[A](something: A): Boolean = false // same for true, obviously
It could always just return its argument, but again, then it wouldn't actually care about booleans, and its type would rather be
def identity[A](something: A): A = something
So, really, the only thing it can do is return a different boolean than the one that was passed in, and since there are only two booleans, we know that our mysteryMethod is, in fact, not:
def not(someBoolean: Boolean): Boolean = if (someBoolean) false else true
So, here, we have an example where the types don't give us the implementation, but at least they give us a (small) set of 4 possible implementations, only one of which makes sense.
(By the way: it turns out that there is only one possible implementation of a method which takes an A and returns an A, and it is the identity method shown above.)
So, to recap:
purity means that you can only use the building blocks that were handed to you (the arguments)
a strong, strict, static type system means that you can only use those building blocks in such a way that their types line up
totality means that you can't do stupid things (like infinite loops or throwing exceptions)
parametricity means that you cannot make any assumptions at all about your type variables
Think about your arguments as parts of a machine and your types as connectors on those machine parts. There will only be a limited number of ways that you can connect those machine parts together in a way that you only plug together compatible connectors and you don't have any leftover parts. Often enough, there will be only one way, or if there are multiple ways, then often one will be obviously the right one.
What this means is that, once you have designed the types of your objects and methods, you won't even have to think about how to implement those methods, because the types will already dictate the only possible way to implement them! Considering how many questions on StackOverflow are basically "how do I implement this?", can you imagine how freeing it must be not having to think about that at all, because the types already dictate the one (or one of a few) possible implementation?
Now, look at the signature of the method in your question and try playing around with different ways to combine a and f in such a way that the types line up and you use both a and f and you will indeed see that there is only one way to do that. (As Chris and Paul have shown.)
def partial1[A,B,C](a: A, f: (A,B) => C): B => C = (b: B) => f(a, b)
Here, partial1 takes as parameters a value of type A and a function that takes a parameter of type A and a parameter of type B, returning a value of type C.
partial1 must return a function taking a value of type B and returning a C. Since A, B, and C are arbitrary, we cannot apply any other functions to their values. So the only possibility is to apply the function f to the value a passed to partial1 and to the value of type B that is a parameter of the function we return.
So you end up with the single possibility that's in the definition: f(a, b).
To take a simpler example, consider the type Option[A] => Boolean. There's only a couple ways to implement this:
def foo1[A](x: Option[A]): Boolean = x match {
  case Some(_) => true
  case None    => false
}
def foo2[A](x: Option[A]): Boolean = !foo1(x)
def foo3[A](x: Option[A]): Boolean = true
def foo4[A](x: Option[A]): Boolean = false
The first two choices are pretty much the same, and the last two are trivial, so essentially there's only one useful thing this function could do, which is tell you whether the Option is Some or None.
The space of possible implementations is "restricted" by the abstractness of the function type. Since A is unconstrained, the option's value could be anything, so the function can't depend on that value in any way, because you know nothing about what it is. The only "understanding" the function may have of its parameter is the structure of Option[_].
Now, back to your example. You have no idea what C is, so there's no way you can construct one yourself. Therefore the function you create is going to have to call f to get a C. And in order to call f, you need to provide arguments of types A and B. Again, since there's no way to create an A or a B yourself, the only thing you can do is use the arguments that are given to you. So there's no other possible function you could write.
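For concreteness, a usage sketch of partial1 (the values add and add5 are mine, for illustration):
def partial1[A, B, C](a: A, f: (A, B) => C): B => C = (b: B) => f(a, b)

val add: (Int, Int) => Int = _ + _
val add5: Int => Int = partial1(5, add)
add5(10) // 15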

Partially applied functions with all arguments missing in Scala

From my understanding, a partially applied function is a function that we can create without supplying some (or all) of the required arguments.
def add(x:Int, y:Int) = x + y
val paf = add(_ :Int, 3)
val paf1 = add(_ :Int, _ :Int)
In the above example, paf1 refers to a partially applied function with all the arguments missing, and I can invoke it using paf1(10,20); the original function can be invoked using add(10,20).
My question is, what is the extra benefit of creating a partially applied function with all the arguments missing, since the invocation syntax is pretty much the same? Is it just to convert methods into first class functions?
Scala's def keyword is how you define methods, and methods are not functions (in Scala). So your add is not a first-class function entity the way your paf1 is, even if they're semantically equivalent in what they do with their arguments to produce a result.
Scala will automatically use partial application to turn a method into an equivalent function, which you can see by extending your example a bit:
def add(x: Int, y: Int) = x + y
...
val pa2: (Int, Int) => Int = add
pa2: (Int, Int) => Int = <function2>
This may seem of little benefit in this example, but in many cases there are non-explicit constraints indicating a function is required (or more accurately, constraints specified explicitly elsewhere) that allow you to simply give a (type-compatible) method name in a place where you need a function.
There is a difference between Methods and functions.
If you look at the declaration of List.map for example, it really expects a function. But the Scala compiler is smart enough to accept both methods and functions.
A quote from here
this trick ... for coercing a method into something where a function is expected, is so easy that even the compiler can detect and do it. In fact, this automatic coercion got its own name – it's called Eta expansion.
On the other hand, have a look at Java 8; as far as I can tell, it's not that easy there.
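A small sketch of eta expansion at work (the method double is mine, for illustration): the compiler turns a method into a function value wherever a function type is expected.
def double(x: Int): Int = x * 2 // a method (def), not a function value
val f: Int => Int = double      // eta-expanded automatically by the compiler
List(1, 2, 3).map(double)      // also accepted: List(2, 4, 6)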
Update: the question was, Why would I ever want an eta-expanded method? One of the great rhetorical strategies in the Scala bible is that they lead you to an example over many pages. I employ the Exodus metaphor below because I just saw "The Ten Commandments" with Charlton Heston. I'm not pretending that this answer is more explanatory than Randall's.
You might need someone with lesser rep to note that the big build-up in the bible's book of Exodus is to:
http://www.artima.com/pins1ed/first-steps-in-scala.html#step6
args foreach println.
The previous step among the "first steps" is, indeed,
args foreach (arg => println(arg))
but I'm guessing no one does it that way if the type inference gods are kind.
From the change log in the spec: "a partially unapplied method is now designated m _ instead
of the previous notation &m." That is, at a certain point, the notion of a "function ptr" became a partial function with no args supplied. Which is what it is. Update: "metaphorically."
I'm sure others can synthesize a bunch of use-cases for this, but really it's just a consequence of the fact that functions are values. It would be much weirder if you couldn't pass a function around as a normal value.
Scala's handling of this stuff is a little clunky. Partial application is usually combined with currying. I consider it a quirk that you can basically eta-expand any expression using _. What you're effectively doing (modulo Scala's syntactic quirks around currying) by writing add(_ : Int, _ : Int) is writing (x : Int) => (y : Int) => add(x, y). I'm sure you can think of instances where the latter definition might be useful in a program.