I am trying to write a function in Scala that will compute the partial derivative of a function with arbitrarily many variables. For example:
One Variable (regular derivative):
def partialDerivative(f: Double => Double)(x: Double) = { (f(x+0.001)-f(x))/0.001 }
Two Variables:
def partialDerivative(c: Char, f: (Double, Double) => Double)(x: Double)(y: Double) = {
if (c == 'x') (f(x+0.0001, y)-f(x, y))/0.0001
else if (c == 'y') (f(x, y+0.0001)-f(x, y))/0.0001
}
I am wondering if there is a way to write partialDerivative where the number of variables in f does not need to be known in advance.
I read some blog posts about varargs but can't seem to come up with the correct signature.
Here is what I tried.
def func(f: (Double*) => Double)(n: Double*)
but this doesn't seem to be correct. Thanks for any help on this.
Double* means f accepts an arbitrary Seq of Doubles, which is not correct.
The only way I can think of to write something like this is using shapeless's Sized. You will need more implicits than this, and possibly some type-level equality implicits as well; type-level programming in Scala is quite complex and I don't have the time to debug this properly, but it should give you the idea:
def partialDerivative[N <: Nat, I <: Nat](f: Sized[Seq[Double], N] => Double)(i: I, xs: Sized[Seq[Double], N])(implicit diff: Diff[I, N]) = {
val (before, atAndAfter) = xs.splitAt(i)
val incrementedAtAndAfter = (atAndAfter.head + 0.0001) +: atAndAfter.tail
val incremented = before ++ incrementedAtAndAfter
(f(incremented) - f(xs)) / 0.0001
}
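If you are willing to give up the compile-time arity check that Sized provides, a plain Seq[Double] => Double version is easy to write. This is a minimal, runnable sketch (the names and the step size 0.0001 are just illustrative), with the arity checked only at run time:
def partialDerivative(f: Seq[Double] => Double)(i: Int, xs: Seq[Double]): Double = {
  val h = 0.0001
  val bumped = xs.updated(i, xs(i) + h) // increment only the i-th coordinate
  (f(bumped) - f(xs)) / h
}

// e.g. the partial derivative of f(x, y) = x * y with respect to y at (2, 3)
val g: Seq[Double] => Double = { case Seq(x, y) => x * y }
partialDerivative(g)(1, Seq(2.0, 3.0)) // ≈ 2.0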
Whilst I understand what a partially applied/curried function is, I still don't fully understand why I would use such a function vs simply overloading a function. I.e. given:
def add(a: Int, b: Int): Int = a + b
val addV = (a: Int, b: Int) => a + b
What is the practical difference between
def addOne(b: Int): Int = add(1, b)
and
def addOnePA = add(1, _:Int)
// or currying
val addOneC = addV.curried(1)
Please note I am NOT asking about currying vs partially applied functions as this has been asked before and I have read the answers. I am asking about currying/partially applied functions VS overloaded functions
The difference in your example is that the overloaded function has the value 1 for the first argument of add hardcoded, i.e. set at compile time, while partially applied or curried functions are meant to capture their arguments dynamically, i.e. at run time. Otherwise, in your particular example, because you are hardcoding 1 in both cases, it's pretty much the same thing.
You would use a partially applied/curried function when you pass it through different contexts and it captures/fills in arguments dynamically until it's completely ready to be evaluated. In FP this is important because very often you don't pass values around, you pass functions. It allows for higher composability and code reusability.
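For instance, a small hedged sketch of that dynamic capture (makeAdder and the console input are made up for illustration, reusing the add method from the question):
def makeAdder(n: Int): Int => Int = add(n, _: Int)

val fromUser = scala.io.StdIn.readLine().toInt // the captured value is only known at run time
val addUser  = makeAdder(fromUser)             // an overload with a hardcoded constant could not express this
List(1, 2, 3).map(addUser)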
There are a couple of reasons why you might prefer partially applied functions. The most obvious, and perhaps most superficial, is that you don't have to write out intermediate functions such as addOnePA.
List(1, 2, 3, 4) map (_ + 3) // List(4, 5, 6, 7)
is nicer than
def add3(x: Int): Int = x + 3
List(1, 2, 3, 4) map add3
Even the anonymous function approach (which is what the compiler expands the underscore into) feels a tiny bit clunky in comparison.
List(1, 2, 3, 4) map (x => x + 3)
Less superficially, partial application comes in handy when you're truly passing around functions as first-class values.
val fs = List[(Int, Int) => Int](_ + _, _ * _, _ / _)
val on3 = fs map (f => f(_, 3)) // partial application
val allTogether = on3.foldLeft{identity[Int] _}{_ compose _}
allTogether(6) // (6 / 3) * 3 + 3 = 9
Imagine if I hadn't told you what the functions in fs were. The trick of coming up with named function equivalents instead of partial application becomes harder to use.
As for currying, it often lets you naturally express transformations of functions that produce other functions (rather than a higher-order function that simply produces a non-function value at the end), which might otherwise be less clear.
For example,
def integrate(f: Double => Double, delta: Double = 0.01)(x: Double): Double = {
val domain = Range.Double(0.0, x, delta)
domain.foldLeft(0.0){ case (acc, a) => delta * f(a) + acc }
}
can be thought of and used in the way that you actually learned integration in calculus, namely as a transformation of a function that produces another function.
def square(x: Double): Double = x * x
// Ignoring issues of numerical stability for the moment...
// The underscore is really just a wart that Scala requires to bind it to a val
val cubic = integrate(square) _
val quartic = integrate(cubic) _
val quintic = integrate(quartic) _
// Not *utterly* horrible for a two line numerical integration function
cubic(1) // 0.32835000000000014
quartic(1) // 0.0800415
quintic(1) // 0.015449626499999999
Currying also alleviates a few of the problems around fixed function arity.
implicit class LiftedApply[A, B](fOpt: Option[A => B]){
def ap(xOpt: Option[A]): Option[B] = for {
f <- fOpt
x <- xOpt
} yield f(x)
}
def not(x: Boolean): Boolean = !x
def and(x: Boolean)(y: Boolean): Boolean = x && y
def and3(x: Boolean)(y: Boolean)(z: Boolean): Boolean = x && y && z
Some(not _) ap Some(false) // Some(true)
Some(and _) ap Some(true) ap Some(true) // Some(true)
Some(and3 _) ap Some(true) ap Some(true) ap Some(true) // Some(true)
By having curried functions, we've been able to "lift" a function to work on Option for as many arguments as we need. If our logic functions had not been curried, then we would have had to have separate functions to lift A => B to Option[A] => Option[B], (A, B) => C to (Option[A], Option[B]) => Option[C], (A, B, C) => D to (Option[A], Option[B], Option[C]) => Option[D] and so on for all the arities we cared about.
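For contrast, here is a hedged sketch of what one of those per-arity lifts would look like if the logic functions were not curried (lift2 is a made-up name for illustration):
def lift2[A, B, C](f: (A, B) => C): (Option[A], Option[B]) => Option[C] =
  (oa, ob) => for { a <- oa; b <- ob } yield f(a, b)

lift2((x: Boolean, y: Boolean) => x && y)(Some(true), Some(true)) // Some(true)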
Currying also has some other miscellaneous benefits when it comes to type inference and is required if you have both implicit and non-implicit arguments for a method.
Finally, the answers to this question list out some more times you might want currying.
I am going through the lectures from Martin Odersky's excellent FP course, and one of the lectures demonstrates higher-order functions through Newton's method for finding fixed points of certain functions. There is a crucial step in the lecture where I think the type signature is being violated, so I would ask for an explanation. (Apologies for the long intro that follows - it felt like it was needed.)
One way of implementing such an algorithm is given like this:
val tolerance = 0.0001
def isCloseEnough(x: Double, y: Double) = abs((x - y) / x) / x < tolerance
def fixedPoint(f: Double => Double)(firstGuess: Double) = {
def iterate(guess: Double): Double = {
val next = f(guess)
if (isCloseEnough(guess, next)) next
else iterate(next)
}
iterate(firstGuess)
}
Next, we try to compute the square root via fixedPoint function, but the naive attempt through
def sqrt(x: Double) = fixedPoint(y => x / y)(1)
is foiled because such an approach oscillates (so, for sqrt(2), the result would alternate indefinitely between 1.0 and 2.0).
To deal with that, we introduce average damping: essentially we compute the mean of the two most recently calculated values, which converges to the solution, therefore
def sqrt(x: Double) = fixedPoint(y => (y + x / y) / 2)(1)
Finally, we introduce the averageDamp function, and the task is to write sqrt with fixedPoint and averageDamp. averageDamp is defined as follows:
def averageDamp(f: Double => Double)(x: Double) = (x + f(x)) / 2
Here comes the part I don't understand - my initial solution was this:
def sqrt(x: Double) = fixedPoint(z => averageDamp(y => x / y)(z))(1)
but prof. Odersky's solution was more concise:
def sqrt(x: Double) = fixedPoint(averageDamp(y => x / y))(1)
My question is - why does it work? According to its signature, the fixedPoint function is supposed to take a function (Double => Double) but it doesn't mind being passed an ordinary Double (which is what averageDamp returns - in fact, if you try to explicitly specify the return type of Double to averageDamp, the compiler won't throw an error).
I think that my approach follows types correctly - so what am I missing here? Where is it specified or implied(?) that averageDamp returns a function, especially given the right-hand side is clearly returning a scalar? How can you pass a scalar to a function that clearly expects functions only? How do you reason about code that seems to not honour type signatures?
Your solution is correct, but it can be more concise.
Let's scrutinize the averageDamp function more closely.
def averageDamp(f: Double => Double)(x: Double): Double = (x + f(x)) / 2
The return type annotation is added to make it clearer. I think this is what you are missing:
but it doesn't mind being passed an ordinary Double (which is what averageDamp returns - in fact, if you try to explicitly specify the return type of Double to averageDamp, the compiler won't throw an error).
But averageDamp(y => x / y) does return a Double => Double function! averageDamp requires TWO argument lists to be passed before it returns a Double.
If the function receives just one argument list, it still wants the remaining one to be completed. So rather than returning the result immediately, it returns a function, saying: "I still need an argument here; feed me that and I will return what you want."
Prof. Odersky passed only ONE argument list to it, not two, so averageDamp is partially applied, in the sense that it returns a Double => Double function.
The course will also tell you that methods with multiple argument lists are syntactic sugar: applying one to all but its last argument list returns a function expecting that last list, i.e.
f(arg1)(arg2)(arg3)...(argN-1) = (argN) => f(arg1)(arg2)(arg3)...(argN-1)(argN)
If you give one argument list fewer than f needs, it just returns the right-hand side of that equation, that is, a function. So keeping in mind that averageDamp(y => x / y), the argument passed to fixedPoint, is actually a function should help you understand the question.
Notice: there is a difference between a partially applied function (or currying) and a method with multiple argument lists.
For example, you cannot declare this:
val a = averageDamp(y => y/2)
The compiler will complain that arguments are missing and suggest following the method with `_' if you want to treat it as a partially applied function.
The difference is explained here: What's the difference between multiple parameters lists and multiple parameters per list in Scala?.
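To make that concrete, a small hedged sketch (the exact compiler wording varies by Scala version; compare the msort error quoted later in this thread):
val a = averageDamp(y => y / 2) _ // ok: a has type Double => Double
a(3.0)                            // (3.0 + 1.5) / 2 = 2.25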
Multiple parameter lists are syntactic sugar for a function that returns another function. You can see this in the scala shell:
scala> :t averageDamp _
(Double => Double) => (Double => Double)
We can write the same function without the syntactic sugar - this is the way we'd do it in e.g. Python:
def averageDamp(f: Double => Double): (Double => Double) = {
def g(x: Double): Double = (x + f(x)) / 2
g
}
Returning a function can look a bit weird to start with, but it's complementary to passing a function as an argument and enables some very powerful programming techniques. Functions are just another type of value, like Int or String.
In your original solution the nesting of the two lambdas makes it slightly confusing; written out, what you have is:
def sqrt(x: Double) = fixedPoint(z => averageDamp(y => x / y)(z))(1)
With this form, you can hopefully see the pattern:
def sqrt(x: Double) = fixedPoint(z => something(z))(1)
And hopefully it's now obvious that this is the same as:
def sqrt(x: Double) = fixedPoint(something)(1)
which is Odersky's version.
I have code that stores matrices of different types, e.g. m1: Array[Array[Double]], m2: List[List[Int]]. As seen, these matrices are all stored as a sequence of rows. Any row is easy to retrieve, but columns seem to require a traversal of the matrix. I'd like to write a very generic function that returns a column from a matrix of either of these types. I've written this in many ways, the latest of which is:
/* Get a column of any matrix stored in rows */
private def column(M: Seq[Seq[Any]], n: Int, c: Seq[Any] = List(),
i: Int = 0): List[Any] = {
if (i != M.size) column(M, n, c :+ M(i)(n), i+1) else c.toList
}
This compiles; however, it doesn't work: I get a type mismatch when I try to pass in an Array[Array[Double]]. I've also tried to write this with view bounds, i.e.
private def column[T1 <% Seq[Any], T2 <% Seq[T1]] ...
But this wasn't fruitful either. How come the first code segment I wrote doesn't work? What is the best way to do this?
import collection.generic.CanBuildFrom
def column[T, M[_]](xss: M[M[T]], c: Int)(
implicit cbf: CanBuildFrom[Nothing, T, M[T]],
mm2s: M[M[T]] => Seq[M[T]],
m2s: M[T] => Seq[T]
): M[T] = {
val bf = cbf()
for (xs <- mm2s(xss)) { bf += m2s(xs).apply(c) }
bf.result
}
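A hedged usage sketch (this relies on CanBuildFrom, so it assumes Scala 2.12 or earlier; note that the container type is preserved):
val m = List(List(1, 2, 3), List(4, 5, 6))
val col: List[Int] = column(m, 1) // List(2, 5)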
If you don't care about the return type, this is a really simple way to do it:
def column[A, M[_]](matrix: M[M[A]], colIdx: Int)
(implicit v1: M[M[A]] => Seq[M[A]], v2: M[A] => Seq[A]): Seq[A] =
matrix.map(_(colIdx))
I suggest you represent a Matrix as an underlying single-dimensional Array (the only kind of Array there is!) and separately represent its structure in terms of rows and columns.
This gives you more flexibility both in representation and access. E.g., you can provide both row-major and column-major organizations. Producing row iterators is just as easy as producing column iterators, regardless of whether it's a row-major or column-major organization.
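A minimal sketch of that idea (the Matrix class below is hypothetical and assumes a row-major layout):
class Matrix(val rows: Int, val cols: Int, private val data: Array[Double]) {
  require(data.length == rows * cols)

  // element access is just index arithmetic over the flat array
  def apply(r: Int, c: Int): Double = data(r * cols + c)

  def row(r: Int): Iterator[Double]    = (0 until cols).iterator.map(c => apply(r, c))
  def column(c: Int): Iterator[Double] = (0 until rows).iterator.map(r => apply(r, c))
}

val m = new Matrix(2, 3, Array(1.0, 2.0, 3.0, 4.0, 5.0, 6.0))
m.column(1).toList // List(2.0, 5.0)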
Try this one:
private def column[T](
M: Seq[Seq[T]], n: Int, c: Seq[T] = List(), i: Int = 0): List[T] =
if (i != M.size) column(M, n, c :+ M(i)(n), i+1) else c.toList
Not sure how to properly formulate the question: there is a problem with currying in the merge sort example from the book Scala by Example, on page 69. The function is defined as follows:
def msort[A](less: (A, A) => Boolean)(xs: List[A]): List[A] = {
def merge(xs1: List[A], xs2: List[A]): List[A] =
if (xs1.isEmpty) xs2
else if (xs2.isEmpty) xs1
else if (less(xs1.head, xs2.head)) xs1.head :: merge(xs1.tail, xs2)
else xs2.head :: merge(xs1, xs2.tail)
val n = xs.length/2
if (n == 0) xs
else merge(msort(less)(xs take n), msort(less)(xs drop n))
}
and then there is an example of how to create other functions from it by currying:
val intSort = msort((x : Int, y : Int) => x < y)
val reverseSort = msort((x:Int, y:Int) => x > y)
however, these two lines give me errors about an insufficient number of arguments. But if I do it like this:
val intSort = msort((x : Int, y : Int) => x < y)(List(1, 2, 4))
val reverseSort = msort((x:Int, y:Int) => x > y)(List(4, 3, 2))
it WILL work. Why? Can someone explain? It looks like the book is really dated, since this is not the first case of such an inconsistency in its examples. Could anyone point me to something more up to date to read? (Preferably a free e-book.)
My compiler (2.9.1) agrees; there seems to be an error here, but the compiler does tell you what to do:
error: missing arguments for method msort in object $iw;
follow this method with `_' if you want to treat it as a partially applied function
So, this works:
val intSort = msort((x : Int, y : Int) => x < y) _
Since the type of intSort is inferred, the compiler doesn't otherwise seem to know whether you intended to partially apply, or whether you missed arguments.
The _ can be omitted when the compiler can infer from the expected type that a partially applied function is what is intended. So this works too:
val intSort: List[Int] => List[Int] = msort((x: Int, y: Int) => x < y)
That's obviously more verbose, but more often you will take advantage of this without any extra boilerplate, for example if msort((x: Int, y: Int) => x < y) were the argument to a function where the parameter type is already known to be List[Int] => List[Int].
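A small sketch of that case (sortAndShow is a made-up helper for illustration):
def sortAndShow(sorter: List[Int] => List[Int]): String =
  sorter(List(3, 1, 2)).mkString(", ")

// No underscore needed: the expected parameter type List[Int] => List[Int]
// tells the compiler that a partially applied function is intended.
sortAndShow(msort((x: Int, y: Int) => x < y)) // "1, 2, 3"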
Edit: Page 181 of the current edition of the Scala Language Specification mentions a tightening of rules for implicit conversions of partially applied methods to functions since Scala 2.0. There is an example of invalid code very similar to the one in Scala by Example, and it is described as "previously legal code".
The square of the hypotenuse of a right triangle is equal to the sum of the squares on the other two sides.
This is Pythagoras's Theorem. A function to calculate the hypotenuse based on the lengths "a" and "b" of its sides would return sqrt(a * a + b * b).
The question is, how would you define such a function in Scala in such a way that it could be used with any type implementing the appropriate methods?
For context, imagine a whole library of math theorems you want to use with Int, Double, Int-Rational, Double-Rational, BigInt or BigInt-Rational types depending on what you are doing, and the speed, precision, accuracy and range requirements.
This only works on Scala 2.8, but it does work:
scala> def pythagoras[T](a: T, b: T, sqrt: T => T)(implicit n: Numeric[T]) = {
| import n.mkNumericOps
| sqrt(a*a + b*b)
| }
pythagoras: [T](a: T,b: T,sqrt: (T) => T)(implicit n: Numeric[T])T
scala> def intSqrt(n: Int) = Math.sqrt(n).toInt
intSqrt: (n: Int)Int
scala> pythagoras(3,4, intSqrt)
res0: Int = 5
More generally speaking, the trait Numeric is effectively a reference on how to solve this type of problem. See also Ordering.
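Taking that hint one step further, here is a hedged sketch of a tiny, hypothetical Sqrt type class used alongside Numeric, so the square root no longer has to be passed explicitly (Sqrt is not part of the standard library):
trait Sqrt[T] { def sqrt(x: T): T }

implicit val sqrtForDouble: Sqrt[Double] = new Sqrt[Double] { def sqrt(x: Double) = math.sqrt(x) }
implicit val sqrtForInt: Sqrt[Int]       = new Sqrt[Int]    { def sqrt(x: Int)    = math.sqrt(x).toInt }

def pythagoras[T](a: T, b: T)(implicit n: Numeric[T], s: Sqrt[T]): T = {
  import n.mkNumericOps
  s.sqrt(a * a + b * b)
}

pythagoras(3, 4)     // 5
pythagoras(3.0, 4.0) // 5.0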
The most obvious way:
type Num = {
def +(a: Num): Num
def *(a: Num): Num
}
def pyth[A <: Num](a: A, b: A)(sqrt: A=>A) = sqrt(a * a + b * b)
// usage
pyth(3, 4)(Math.sqrt)
This is horrible for many reasons. First, we have the problem of the recursive type, Num. This is only allowed if you compile this code with the -Xrecursive option set to some integer value (5 is probably more than sufficient for numbers). Second, the type Num is structural, which means that any usage of the members it defines will be compiled into corresponding reflective invocations. Putting it mildly, this version of pyth is obscenely inefficient, running on the order of several hundred thousand times slower than a conventional implementation. There's no way around the structural type though if you want to define pyth for any type which defines +, * and for which there exists a sqrt function.
Finally, we come to the most fundamental issue: it's over-complicated. Why bother implementing the function in this way? Practically speaking, the only types it will ever need to apply to are real Scala numbers. Thus, it's easiest just to do the following:
def pyth(a: Double, b: Double) = Math.sqrt(a * a + b * b)
All problems solved! This function is usable on values of type Double, Int, Float, even odd ones like Short thanks to the marvels of implicit conversion. While it is true that this function is technically less flexible than our structurally-typed version, it is vastly more efficient and eminently more readable. We may have lost the ability to calculate the Pythagorean theorem for unforeseen types defining + and *, but I don't think you're going to miss that ability.
Some thoughts on Daniel's answer:
I've experimented with generalizing Numeric to a Real trait, which would be more appropriate for this function since it could provide the sqrt function. This would result in:
def pythagoras[T](a: T, b: T)(implicit n: Real[T]) = {
import n.mkNumericOps
(a*a + b*b).sqrt
}
It is tricky, but possible, to use literal numbers in such generic functions.
def pythagoras[T](a: T, b: T)(sqrt: (T => T))(implicit n: Numeric[T]) = {
import n.mkNumericOps
implicit val fromInt = n.fromInt _
//1 * sqrt(a*a + b*b) Not Possible!
sqrt(a*a + b*b) * 1 // Possible
}
Type inference works better if the sqrt is passed in a second parameter list.
Parameters a and b would be passed as Objects, but @specialized could fix this. Unfortunately there will still be some overhead in the math operations.
You can almost do without the import of mkNumericOps. I got frustratingly close!
There is a method in java.lang.Math:
public static double hypot (double x, double y)
for which the javadocs asserts:
Returns sqrt(x² + y²) without intermediate overflow or underflow.
Looking into src.zip, Math.hypot uses StrictMath.hypot, which is a native method:
public static native double hypot(double x, double y);
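For completeness, a small sketch of calling it from Scala (scala.math.hypot simply forwards to java.lang.Math.hypot):
def hypotenuse(a: Double, b: Double): Double = math.hypot(a, b)

hypotenuse(3.0, 4.0) // 5.0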