Precedence of the operators & and | in Scala

In the book "Programming in Scala" (Martin Odersky, 2nd edition) they give this operator precedence table (not complete here):
* / %
+ -
:
= !
< >
&
^
|
So that if the first character of an operator has a higher position in this table than the first character of another operator, the former operator is evaluated first.
According to that this code should print out yy, but it prints out x:
def x() = { print('x'); true }
def y() = { print('y'); true }
x || y && y // prints `x` but should print `yy`
My understanding is that if & is higher in the table than |, it must be evaluated first. It is like * having precedence over +, so in x + y * y the multiplication is evaluated first.
EDIT:
Also look at this code
def x() = { print('x'); 1 }
def y() = { print('y'); 3 }
x == x + y * y // xxyy
It looks like it evaluates them from left to right but "solves" them according to the table.

Raw version:
x || y && y
With precedence applied:
x || (y && y)
(Note, if the precedence was reversed it would be (x || y) && y.)
Now, you are expecting (y && y) to get evaluated before x, but Scala always evaluates left-to-right (see §6.6 of the language spec). And, as others have mentioned, || is a short-circuiting operator, so the second operand is not even evaluated if the first operand returns true.
Another way to think of it is as two method calls, where the second operand of each is passed by name:
or (x, and(y, y))
def or(a: Boolean, b: => Boolean): Boolean = if (a) true else b
def and(a: Boolean, b: => Boolean): Boolean = if (!a) false else b
Under the left-to-right evaluation model, x is ALWAYS evaluated first, then maybe y twice.
If you haven't already done so, you could follow Martin Odersky's functional programming course on Coursera where he talks about this very subject in lecture 1 or 2.
Your second example is equivalent to
add(x, mult(y, y))
def add(a: Int, b: Int) = a + b
def mult(a: Int, b: Int) = a * b
x is always evaluated first, then y, twice.

It prints x because the call to x() returns true, and with the || operator, if the left operand is true, the right operand is not evaluated at all. To force evaluation, use | instead: even if the left operand is true, the right operand will still be evaluated.
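For instance, with the x and y definitions from the question (a quick sketch; | and & are the non-short-circuiting Boolean operators):
x() || y() && y() // prints "x"  : && binds tighter than ||, but || short-circuits
x() |  y() &  y() // prints "xyy": | and & always evaluate both operands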
Updated
The example with booleans is not a good one, because with booleans so-called "short-circuit" evaluation is used, and scalac won't even look at the second part of the or-expression if the left part is true. Think of this operation like:
def || (a: => Boolean) = ???

Related

Scala: AnyVal usage

I am setting a val in Scala using an if statement. I only want to set the val when a certain criterion is met, and I would like it to be an Int. I use/compare the val later on with another Int which is (currently) always positive:
val transpose = if(x < y) 10 else -1
...
if(transpose > min) doSomething
I do not like this code because, in the future, min may in fact be negative.
I have changed the if statement to:
val transpose = if(x < y) 10
This gives transpose the type AnyVal. I am wondering how I can utilise this? I still wish to compare the value held within the AnyVal with min, but only if it is an Int; otherwise, I want to continue as if the if statement was unsuccessful. In pseudo-code:
if(transpose instanceOf(Int) && transpose > min) doSomething
I have toyed with using transpose.getClass but it just seems like the wrong way to do it.
Either use an Option, as suggested by Ende Neu:
val transpose = if(x < y) Some(10) else None
if(transpose.exists(_ > min)) doSomething
Or, just use Int.MinValue:
val transpose = if(x < y) 10 else Int.MinValue
if(transpose > min) doSomething // won't call doSomething for any "min" unless x < y
Another good option is to make transpose a function of type Int => Boolean (assuming it's only used for this if statement), thus not needing a type to represent the threshold:
val transpose: Int => Boolean = m => if (x < y) 10 > m else false
if(transpose(min)) doSomething

Function parameters evaluation in Scala (functional programming)

please find below a piece of code from Coursera online course (lecture 2.3) on functional programming in Scala.
package week2
import math.abs
object lecture2_3_next {
  def fixedPoint(f: Double => Double)(firstGuess: Double): Double = {
    val tolerance = 0.0001
    def isCloseEnough(x: Double, y: Double): Boolean = abs((x - y) / x) / x < tolerance
    def iterate(guess: Double): Double = {
      val next = f(guess)
      if (isCloseEnough(guess, next)) next
      else iterate(next)
    }
    iterate(firstGuess)
  }
  def averageDamp(f: Double => Double)(x: Double): Double = (x + f(x)) / 2
  def sqrt(x: Double): Double = fixedPoint(averageDamp(y => x / y))(1)
  sqrt(2)
}
A few points blocked me while I was trying to understand this piece of code.
I'd like your help understanding it.
The two points that puzzle me are:
- when you call averageDamp, there are two parameters 'x' and 'y' in the function passed (e.g. averageDamp(y => x / y)), but you never specify the 'y' parameter in the definition of the averageDamp function (e.g. def averageDamp(f: Double => Double)(x: Double): Double = (x + f(x)) / 2). Where and how does the Scala compiler evaluate the 'y' parameter?
- the second point may be related to the first, I don't know. When I call the averageDamp function, I pass only the function 'f' parameter (e.g. y => x / y), but I don't pass the second parameter, which is 'x' (e.g. the (x: Double) second parameter). How does the Scala compiler evaluate the 'x' parameter in this case to produce the result of the averageDamp call?
I think I missed something about the evaluation or substitution model of scala and functional programming.
Thanks for your help, and happy new year!
Hervé
1) You don't pass an x and a y parameter as f, you pass a function. The function is defined as y => x / y, where y is just a placeholder for the argument of this function, while x is a fixed value in this context, as it is given as an argument to the sqrt method (in the example x is 2). Instead of the fancy lambda syntax, you could write as well
def sqrt(x: Double): Double = fixedPoint(averageDamp(
  new Function1[Double, Double] {
    def apply(y: Double): Double = x / y
  }
))(1)
Nothing magic about this, just an abbreviation.
2) When you have a second parameter list and don't use it when calling the method, you are doing something called "currying", and you get back a partially applied function. Consider
def add(x:Int)(y:Int) = x + y
If you call it as add(2)(3), everything is "normal", and you get back 5. But if you call add(2), the second argument is still "missing", and you get back a function expecting this missing second argument, so you have something like y => 2 + y.
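For example (a small sketch; note that in Scala 2 you need a trailing underscore, or an expected function type, to turn add(2) into a function value):
def add(x: Int)(y: Int) = x + y
val addTwo = add(2) _   // Int => Int: the second argument list is still "missing"
addTwo(3)               // 5
add(2)(3)               // 5, both argument lists supplied at once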
The x is not a parameter of the (anonymous) function, it is a parameter of the function sqrt. For the anonymous function it is a value captured in a closure.
To make it more obvious, let's rewrite it and use a named instead of an anonymous function:
def sqrt(x: Double): Double = fixedPoint(averageDamp(y => x / y))(1)
can be rewritten as this:
def sqrt(x: Double): Double = {
  def funcForSqrt(y: Double): Double = x / y // Note that x is not a parameter of funcForSqrt
  // Use the function funcForSqrt as a parameter of averageDamp
  fixedPoint(averageDamp(funcForSqrt))(1)
}

Tail Recursion in Scala with Base case as zero

Suppose I have this code in scala :
def factorial(accumulator: Int, x: Int): Int = {
  if (x == 1)
    return accumulator
  factorial(x * accumulator, x - 1)
}
println(factorial(1,0))
And the Output :
0
Now I have two questions :
1) Isn't the definition of this function fundamentally wrong? (It will not give the right answer for zero.) I could always wrap this function inside another function and treat zero as a special case returning 1, but that does not feel right or in tune with the formal definition.
2) Also, why am I returned 0 as the answer in the first place? Why doesn't the code get stuck in an infinite loop?
def factorial(x: Int): Int = {
  @annotation.tailrec
  def factorial(accumulator: Int, x: Int): Int = {
    if (x <= 0)
      accumulator
    else
      factorial(x * accumulator, x - 1)
  }
  assert(x >= 0, """argument should be "non-negative integer" """)
  factorial(1, x)
}
You should not give the user the possibility to call factorial in a wrong way, so the accumulator-taking function should be internal. And note that factorial(0) = 1.
Yes, you should hide the accumulator and make it an argument of an internal, tail-recursive function. The special case for zero should also be handled explicitly; there is nothing 'against the formal factorial definition' in doing so.
It terminates because the Int counter overflows past the maximum negative value: x counts down 0, -1, -2, ... past Int.MinValue, wraps around to Int.MaxValue, and keeps decreasing until it reaches 1. The accumulator was multiplied by 0 on the first call, so the result is 0.
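A one-line check of that wrap-around (plain Int arithmetic, nothing specific to the question's code):
println(Int.MinValue - 1 == Int.MaxValue) // true: Int subtraction wraps around
// so x eventually comes back down to 1 and the recursion stops, with the accumulator still 0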

Convert normal recursion to tail recursion

I was wondering if there is some general method to convert a "normal" recursion with foo(...) + foo(...) as the last call to a tail-recursion.
For example (scala):
def pascal(c: Int, r: Int): Int = {
if (c == 0 || c == r) 1
else pascal(c - 1, r - 1) + pascal(c, r - 1)
}
A general solution for functional languages to convert a recursive function to a tail-call equivalent:
A simple way is to wrap the non tail-recursive function in the Trampoline monad.
def pascalM(c: Int, r: Int): Trampoline[Int] = {
  // Trampoline here comes from a library such as scalaz (see the paper cited below)
  if (c == 0 || c == r) Trampoline.done(1)
  else for {
    a <- Trampoline.suspend(pascalM(c - 1, r - 1))
    b <- Trampoline.suspend(pascalM(c, r - 1))
  } yield a + b
}
val pascal = pascalM(10, 5).run
So the pascal function is not a recursive function anymore. However, the Trampoline monad builds a nested structure describing the computation that needs to be done. run is then a tail-recursive function that walks through this tree-like structure, interpreting it, and at the base case returns the value.
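To see what run does, here is a minimal hand-rolled trampoline (a sketch only, not scalaz's actual Free-based implementation, and without the flatMap needed for the for-comprehension above):
sealed trait Trampoline[+A]
case class Done[A](result: A) extends Trampoline[A]
case class More[A](next: () => Trampoline[A]) extends Trampoline[A]

@annotation.tailrec
def run[A](t: Trampoline[A]): A = t match {
  case Done(a)    => a
  case More(next) => run(next()) // constant stack: the "recursion" is compiled to a loop
}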
A paper from Rúnar Bjarnason on the subject of trampolines: Stackless Scala With Free Monads
In cases where there is a simple modification to the value of a recursive call, that operation can be moved to the front of the recursive function. The classic example of this is Tail recursion modulo cons, where a simple recursive function in this form:
def recur[A](...):List[A] = {
...
x :: recur(...)
}
which is not tail recursive, is transformed into
def recur[A](...): List[A] = {
  def consRecur(..., consA: A): List[A] = {
    consA :: ...
    ...
    consRecur(..., ...)
  }
  ...
  consRecur(..., ...)
}
Alexlv's example is a variant of this.
This is such a well known situation that some compilers (I know of Prolog and Scheme examples but Scalac does not do this) can detect simple cases and perform this optimisation automatically.
Problems combining multiple calls to recursive functions have no such simple solution. TMRC optimisation is useless, as you are simply moving the first recursive call to another non-tail position. The only way to reach a tail-recursive solution is to remove all but one of the recursive calls; how to do this is entirely context dependent but requires finding an entirely different approach to solving the problem.
As it happens, in some ways your example is similar to the classic Fibonacci sequence problem; in that case the naive but elegant doubly-recursive solution can be replaced by one which loops forward from the 0th number.
def fib(n: Long): Long = n match {
  case 0 | 1 => n
  case _     => fib(n - 2) + fib(n - 1)
}

def fib(n: Long): Long = {
  def loop(current: Long, next: => Long, iteration: Long): Long = {
    if (n == iteration)
      current
    else
      loop(next, current + next, iteration + 1)
  }
  loop(0, 1, 0)
}
For the Fibonacci sequence, this is the most efficient approach (a stream-based solution is just a different expression of this solution that can cache results for subsequent calls). Now,
you can also solve your problem by looping forward from c0/r0 (well, c0/r2) and calculating each row in sequence - the difference being that you need to cache the entire previous row. So while this has a similarity to fib, it differs dramatically in the specifics and is also significantly less efficient than your original, doubly-recursive solution.
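For reference, a minimal sketch of that forward, row-caching loop (a hypothetical pascalByRows helper, not the code below) might look like:
def pascalByRows(c: Int, r: Int): Long = {
  @annotation.tailrec
  def loop(i: Int, row: Vector[Long]): Long =
    if (i == r) row(c)
    else loop(i + 1, (0L +: row).zip(row :+ 0L).map { case (a, b) => a + b }) // next row from the cached one
  loop(0, Vector(1L))
}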
Here's an approach for your pascal triangle example which can calculate pascal(30,60) efficiently:
def pascal(column: Long, row: Long): Long = {
  type Point = (Long, Long)
  type Points = List[Point]
  type Triangle = Map[Point, Long]

  def above(p: Point) = (p._1, p._2 - 1)
  def aboveLeft(p: Point) = (p._1 - 1, p._2 - 1)

  def find(ps: Points, t: Triangle): Long = ps match {
    // Found the ultimate goal
    case (p :: Nil) if t contains p => t(p)
    // Found an intermediate point: pop the stack and carry on
    case (p :: rest) if t contains p => find(rest, t)
    // Hit a triangle edge, add it to the triangle
    case ((c, r) :: _) if (c == 0) || (c == r) => find(ps, t + ((c, r) -> 1))
    // Triangle contains (c - 1, r - 1)...
    case (p :: _) if t contains aboveLeft(p) =>
      if (t contains above(p))
        // And it contains (c, r - 1)! Add to the triangle
        find(ps, t + (p -> (t(aboveLeft(p)) + t(above(p)))))
      else
        // Does not contain (c, r - 1). So find that
        find(above(p) :: ps, t)
    // If we get here, we don't have (c - 1, r - 1). Find that.
    case (p :: _) => find(aboveLeft(p) :: ps, t)
  }

  require(column >= 0 && row >= 0 && column <= row)

  (column, row) match {
    case (c, r) if (c == 0) || (c == r) => 1
    case p => find(List(p), Map())
  }
}
It's efficient, but I think it shows how ugly complex recursive solutions can become as you deform them to become tail recursive. At this point, it may be worth moving to a different model entirely. Continuations or monadic gymnastics might be better.
You want a generic way to transform your function. There isn't one. There are helpful approaches, that's all.
I don't know how theoretical this question is, but a recursive implementation won't be efficient even with tail-recursion. Try computing pascal(30, 60), for example. I don't think you'll get a stack overflow, but be prepared to take a long coffee break.
Instead, consider using a Stream or memoization:
val pascal: Stream[Stream[Long]] =
  (Stream(1L)
    #:: (Stream from 1 map { i =>
      // compute row i
      (1L
        #:: (pascal(i - 1)    // take the previous row
              sliding 2       // and add adjacent values pairwise
              collect { case Stream(a, b) => a + b }).toStream
        ++ Stream(1L))
    }))
The accumulator approach
def pascal(c: Int, r: Int): Int = {
  def pascalAcc(acc: Int, leftover: List[(Int, Int)]): Int = {
    if (leftover.isEmpty) acc
    else {
      val (c1, r1) = leftover.head
      // Edge.
      if (c1 == 0 || c1 == r1) pascalAcc(acc + 1, leftover.tail)
      // Safe checks.
      else if (c1 < 0 || r1 < 0 || c1 > r1) pascalAcc(acc, leftover.tail)
      // Add 2 other points to accumulator.
      else pascalAcc(acc, (c1, r1 - 1) :: ((c1 - 1, r1 - 1) :: leftover.tail))
    }
  }
  pascalAcc(0, List((c, r)))
}
It does not overflow the stack, but for big rows and columns, as Aaron mentioned, it's not fast.
Yes, it's possible. Usually it's done with the accumulator pattern, through some internally defined function which has one additional argument carrying the accumulated value. Here is an example that counts the length of a list.
For example normal recursive version would look like this:
def length[A](xs: List[A]): Int = if (xs.isEmpty) 0 else 1 + length(xs.tail)
That's not a tail-recursive version; in order to eliminate the last addition operation we have to accumulate the value somehow along the way, for example with the accumulator pattern:
def length[A](xs: List[A]) = {
  def inner(ys: List[A], acc: Int): Int = {
    if (ys.isEmpty) acc else inner(ys.tail, acc + 1)
  }
  inner(xs, 0)
}
A bit longer to code, but I think the idea is clear. Of course you can do it without an inner function, but in that case you have to provide the initial acc value manually.
I'm pretty sure it's not possible in the simple way you're looking for in the general case, but it depends on how elaborate you permit the changes to be.
A tail-recursive function must be re-writable as a while-loop, but try implementing, for example, a Fractal Tree using while-loops. It's possible, but you need to use an array or collection to store the state for each point, which substitutes for the data otherwise stored in the call stack.
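A minimal sketch of that idea, assuming a hypothetical binary Tree type, with an explicit list standing in for the call stack:
sealed trait Tree
case class Leaf(value: Int) extends Tree
case class Node(left: Tree, right: Tree) extends Tree

def sumTree(root: Tree): Int = {
  var total = 0
  var stack: List[Tree] = List(root) // the collection that replaces the call stack
  while (stack.nonEmpty) {
    val current = stack.head
    stack = stack.tail
    current match {
      case Leaf(v)    => total += v
      case Node(l, r) => stack = l :: r :: stack // "recurse" by pushing work onto the stack
    }
  }
  total
}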
It's also possible to use trampolining.
It is indeed possible. The way I'd do this is to begin with List(1) and keep recursing till you get to the row you want.
It's worth noticing that you can optimize it: if c == 0 or c == r the value is one, and to calculate, say, column 3 of the 100th row you still only need to calculate the first three elements of the previous rows.
A working tail recursive solution would be this:
def pascal(c: Int, r: Int): Int = {
  @annotation.tailrec
  def pascalAcc(c: Int, r: Int, acc: List[Int]): List[Int] = {
    if (r == 0) acc
    else pascalAcc(c, r - 1,
      // from let's say 1 3 3 1 builds 0 1 3 3 1 0, takes only the
      // subset that matters (if asking for col c, no cols after c are
      // used) and uses sliding to build (0 1) (1 3) (3 3) etc.
      (0 +: acc :+ 0).take(c + 2)
        .sliding(2, 1).map { x => x.reduce(_ + _) }.toList)
  }
  if (c == 0 || c == r) 1
  else pascalAcc(c, r, List(1))(c)
}
The annotation @tailrec makes the compiler check that the function is actually tail recursive.
It could probably be further optimized: given that the rows are symmetric, if c > r/2 then pascal(c, r) == pascal(r - c, r)... but that is left to the reader ;)
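For what it's worth, a sketch of that symmetry shortcut (a hypothetical wrapper around the pascal above) could be:
def pascalSym(c: Int, r: Int): Int =
  if (c > r / 2) pascal(r - c, r) else pascal(c, r) // work on the narrower half of the row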

Implementing NPlusK patterns in Scala

I thought I could implement n+k patterns as an active pattern in Scala via unapply, but it seems to fail with "unspecified value parameter: k":
object NPlusK {
  def apply(n : Int, k : Int) = {
    n + k
  }
  def unapply(n : Int, k : Int) = {
    if (n > 0 && n > k) Some(n - k) else None
  }
}

object Main {
  def main(args: Array[String]): Unit = {
  }
  def fac(n: Int): BigInt = {
    n match {
      case 0 => 1
      case NPlusK(n, 1) => n * fac(n - 1)
    }
  }
}
Is it possible to implement n+k patterns in Scala and in that event how?
You should look at this question for a longer discussion, but here's a short adaptation for your specific case.
An unapply method can only take one argument, and must decide from that argument how to split it into two parts. Since there are multiple ways to divide some integer x into n and k such that x = n + k, you can't use an unapply for this.
You can get around it by creating a separate extractor for each k. Thus, instead of NplusK you'd have Nplus1, Nplus2, etc., since there is exactly one way to get n from x such that x = n + 1.
case class NplusK(k: Int) {
  def unapply(n: Int) = if (n > 0 && n > k) Some(n - k) else None
}

val Nplus1 = NplusK(1)
val Nplus1(n) = 5 // n = 4
So your match becomes:
n match {
  case 0 => 1
  case Nplus1(n) => n * fac(n - 1)
}
The deconstructor unapply does not work this way at all. It takes only one argument, the matched value, and returns an option on a tuple, with as many elements as there are arguments to your pattern (NPlusK). That is, when you have
(n: Int) match {
...
case NPlusK(n, 1)
it will look for an unapply method with an Int (or a supertype) argument. If there is such a method, and if its return type is an Option of a Tuple2 (as NPlusK appears with two arguments in the pattern), then it will try to match. Whatever subpatterns there are inside NPlusK (here the variable n and the constant 1) will not be passed to unapply in any way (what would you expect if you wrote case NPlusK(NPlusK(1, x), NPlusK(1, y))?). Instead, if unapply returns some tuple, then each element of the tuple will be matched against the corresponding subpattern - here n, which always matches, and 1, which will match if the value is equal to 1.
You could write
def unapply(n: Int) = if (n > 0) Some((n - 1, 1)) else None
That would match your NPlusK(n, 1). But it would not match NPlusK(n, 2), nor NPlusK(1, n) (except when n is 2). This does not make much sense: a pattern should probably have only one possible match, yet NPlusK(x, y) can match n in many different ways.
What would work would be something like Peano integers, with Succ(n) matching n + 1.
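For illustration, a minimal Succ extractor along those lines (a sketch, with a hypothetical fac using it) could be:
object Succ {
  def unapply(n: Int): Option[Int] = if (n > 0) Some(n - 1) else None // matches n + 1, binding n
}

def fac(n: Int): BigInt = n match {
  case 0       => 1
  case Succ(m) => BigInt(m + 1) * fac(m)
}

fac(5) // 120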