I'm reading "Programming in Scala, 2nd Edition". In section 24.4, it's noted that Iterable contains many methods that cannot be efficiently written without an iterator, and Table 24.2 lists these methods. However, I don't understand why some of them cannot be efficiently implemented without an iterator. For example, consider zipWithIndex.
def zipWithIndex[A1 >: A, That](implicit bf: CanBuildFrom[Repr, (A1, Int), That]): That = {
  val b = bf(repr)
  var i = 0
  for (x <- this) {
    b += ((x, i))
    i += 1
  }
  b.result
}
Why not move this definition to Traversable? It seems to me that the code could be exactly the same and there would be no difference in efficiency.
You're completely correct, and your implementation would work. There is no good reason to have zipWithIndex defined in Iterable and not in Traversable; neither makes any guarantee about the order in which the elements are traversed.
(This is my first answer on StackOverflow. Hope I've been helpful. :) If I've not, please tell me.)
Traversable does not guarantee the order in which the elements will be visited and only requires you to define a foreach method with the following signature:
def foreach[U](f: Elem => U): Unit
Since this method just needs to call f for each element in any order, it doesn't make sense to have an index on elements since the order could be different for each invocation of foreach.
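In fact, here is a minimal sketch of zipWithIndex written against nothing but foreach, which is all Traversable provides (the helper name is mine, and the result type is fixed to List to avoid the CanBuildFrom machinery):

def zipWithIndexViaForeach[A](xs: Traversable[A]): List[(A, Int)] = {
  val b = List.newBuilder[(A, Int)]
  var i = 0
  xs.foreach { x => // foreach is the only abstract method of Traversable
    b += ((x, i))
    i += 1
  }
  b.result()
}

The point above still stands, though: the indices are only meaningful if foreach happens to visit the elements in a stable order, which Traversable does not promise.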
Edit: this is really just an explanation of why it's not on Traversable. As Luigi pointed out in the comments, zipWithIndex would make more sense on Seq.
Related
I was recently reading Category Theory for Programmers and in one of the challenges, Bartosz proposed to write a function called memoize which takes a function as an argument and returns essentially the same one, with the difference that the first time this new function is called it stores the result for that argument, and then returns this stored result each time it is called again.
def memoize[A, B](f: A => B): A => B = ???
The problem is, I can't think of any way to implement this function without resorting to mutability. Moreover, the implementations I have seen use mutable data structures to accomplish the task.
My question is, is there a purely functional way of accomplishing this? Maybe without mutability or by using some functional trick?
Thanks for reading my question and for any future help. Have a nice day!
is there a purely functional way of accomplishing this?
No. Not in the narrowest sense of pure functions and using the given signature.
TLDR: Use mutable collections, it's okay!
Impurity of g
val g = memoize(f)
// state 1
g(a)
// state 2
What would you expect to happen for the call g(a)?
If g(a) memoizes the result, an (internal) state has to change, so the state is different after the call g(a) than before.
As this could be observed from the outside, the call to g has side effects, which makes your program impure.
From the Book you referenced, 2.5 Pure and Dirty Functions:
[...] functions that
always produce the same result given the same input and
have no side effects
are called pure functions.
Is this really a side effect?
Normally, at least in Scala, internal state changes are not considered side effects.
See the definition in the Scala Book
A pure function is a function that depends only on its declared inputs and its internal algorithm to produce its output. It does not read any other values from “the outside world” — the world outside of the function’s scope — and it does not modify any values in the outside world.
The following examples of lazy computations both change their internal states, but are normally still considered purely functional as they always yield the same result and have no side effects apart from internal state:
lazy val x = 1
// state 1: x is not computed
x
// state 2: x is 1
val ll = LazyList.continually(0)
// state 1: ll = LazyList(<not computed>)
ll(0)
// state 2: ll = LazyList(0, <not computed>)
In your case, the equivalent would be something using a private, mutable Map (as in the implementations you may have found), like:
import scala.collection.mutable

def memoize[A, B](f: A => B): A => B = {
  val cache = mutable.Map.empty[A, B]
  (a: A) => cache.getOrElseUpdate(a, f(a))
}
Note that the cache is not public.
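A quick demo of what a caller sees (the function name is mine, and it deliberately prints, i.e. it is impure, just to make the caching visible):

def slowSquare(i: Int): Int = { println(s"computing $i"); i * i }

val g = memoize(slowSquare)
g(3) // prints "computing 3", returns 9
g(3) // returns 9 again, but prints nothing: the cache was hit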
So, for a pure function f and without looking at memory consumption, timings, reflection or other evil stuff, you won't be able to tell from the outside whether f was called twice or g cached the result of f.
In this sense, side effects are only things like printing output, writing to public variables, files etc.
Thus, this implementation is considered pure (at least in Scala).
Avoiding mutable collections
If you really want to avoid var and mutable collections, you need to change the signature of your memoize method.
This is because if g cannot change internal state, it won't be able to memoize anything new after it has been initialized.
An (inefficient but simple) example would be
def memoizeOneValue[A, B](f: A => B)(a: A): (B, A => B) = {
  val b = f(a)
  val g = (v: A) => if (v == a) b else f(v)
  (b, g)
}

val (b1, g) = memoizeOneValue(f)(a1)
val (b2, h) = memoizeOneValue(g)(a2)
// ...
The result of f(a1) would be cached in g, but nothing else. Then, you could chain this and always get a new function.
If you are interested in a faster version of that, see #esse's answer, which does the same but more efficiently (using an immutable map, so O(log n) lookups instead of the O(n) chain of functions above).
Let's try (note: I have changed the return type of memoize so it can carry the cached data):
import scala.language.existentials
type M[A, B] = A => T forSome { type T <: (B, A => T) }
def memoize[A, B](f: A => B): M[A, B] = {
  import scala.collection.immutable
  def withCache(cache: immutable.Map[A, B]): M[A, B] = a => cache.get(a) match {
    case Some(b) => (b, withCache(cache))
    case None =>
      val b = f(a)
      (b, withCache(cache + (a -> b)))
  }
  withCache(immutable.Map.empty)
}
def f(i: Int): Int = { print(s"Invoke f($i)"); i }
val (i0, m0) = memoize(f)(1) // f is invoked only this first time
val (i1, m1) = m0(1)         // cached: f is not invoked again
val (i2, m2) = m1(1)
Yes, there are purely functional ways to implement polymorphic function memoization. The topic is surprisingly deep and even summons the Yoneda Lemma, which is likely what Bartosz had in mind with this exercise.
The blog post Memoization in Haskell gives a nice introduction by simplifying the problem a bit: instead of looking at arbitrary functions it restricts the problem to functions from the integers.
The following memoize function takes a function of type Int -> a and
returns a memoized version of the same function. The trick is to turn
a function into a value because, in Haskell, functions are not
memoized but values are. memoize converts a function f :: Int -> a
into an infinite list [a] whose nth element contains the value of f n.
Thus each element of the list is evaluated when it is first accessed
and cached automatically by the Haskell runtime thanks to lazy
evaluation.
memoize :: (Int -> a) -> (Int -> a)
memoize f = (map f [0 ..] !!)
Apparently the approach can be generalised to functions of arbitrary domains. The trick is to come up with a way to use the type of the domain as an index into a lazy data structure used for "storing" previous values. And this is where the Yoneda Lemma comes in and my own understanding of the topic becomes flimsy.
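For what it's worth, here is a rough Scala analogue of the integer-domain trick (my own sketch, not from the blog post), using LazyList, whose elements are cached once forced:

def memoizeNat[A](f: Int => A): Int => A = {
  // an infinite lazy sequence whose nth element is f(n);
  // each element is computed at most once and then cached
  val cache = LazyList.from(0).map(f)
  n => cache(n) // only valid for n >= 0
}

One caveat: unlike in Haskell, indexing element n here also evaluates the elements before it, because forcing a LazyList cell computes its head.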
Alright, Scala has me feeling pretty dense. I'm finding the docs pretty impenetrable -- and worse, you can't Google the term "Scala ++:" because Google drops the operator terms!
I was reading some code and saw this line:
Seq(file) ++: children.flatMap(walkTree(_))
But couldn't figure it out. The docs for Seq show three things:
++
++:
++:
Where the latter two are overloaded to do... something. The actual explanation in the docs says that they do the same thing as ++, namely, add one list to another.
So, what exactly is the difference between the operators..?
++ and ++: return different results when the operands are different types of collection. ++ returns the same collection type as the left side, and ++: returns the same collection type as the right side:
scala> List(5) ++ Vector(5)
res2: List[Int] = List(5, 5)
scala> List(5) ++: Vector(5)
res3: scala.collection.immutable.Vector[Int] = Vector(5, 5)
There are two overloaded versions of ++: solely for implementation reasons. ++: needs to be able to take any TraversableOnce, but an overloaded version is provided for Traversable (a subtype of TraversableOnce) for efficiency.
Just to make sure:
A colon (:) at the end of a method name makes the call upside-down.
Let's make two methods and see what's gonna happen:
object Test {
  def ~(i: Int) = null
  def ~:(i: Int) = null // putting ":" in the tail!

  this ~ 1    // compiled
  1 ~: this   // compiled
  this.~(1)   // compiled
  this.~:(1)  // compiled.. lol
  this ~: 1   // error
  1 ~ this    // error
}
So, in seq1 ++: seq2, ++: is actually seq2's method.
Edit: as #okiharaherbst mentions, this is called right associativity.
Scala function naming will look cryptic unless you learn a few simple rules and their precedence.
In this case, a trailing colon means that the method is right-associative, as opposed to the more usual left associativity that you see in imperative languages.
So ++: as in List(10) ++: Vector(10) is not a method on the list but a method called on the vector, even though it appears on the vector's left-hand side, i.e., it is the same as Vector(10).++:(List(10)) and returns a vector.
++ as in List(10) ++ Vector(10) is now a method called on the list (left associativity), i.e., it is the same as List(10).++(Vector(10)) and returns a list.
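You can check the desugaring directly in the REPL:

scala> List(10) ++: Vector(10)
res0: scala.collection.immutable.Vector[Int] = Vector(10, 10)

scala> Vector(10).++:(List(10))
res1: scala.collection.immutable.Vector[Int] = Vector(10, 10)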
what exactly is the difference between the operators..?
The kind of collection each of them operates on.
def ++[B](that: GenTraversableOnce[B]): Seq[B]
def ++:[B >: A, That](that: Traversable[B])(implicit bf: CanBuildFrom[Seq[A], B, That]): That
def ++:[B](that: TraversableOnce[B]): Seq[B]
As commented in "What is the basic collection type in Scala?"
Traversable extends TraversableOnce (which unites Iterator and Traversable), and
TraversableOnce extends GenTraversableOnce (which unites the sequential collections and the parallel ones).
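If you want to convince yourself of that subtyping, this compiles on Scala 2.12 (a throwaway check of my own, not from the linked answer):

import scala.collection.{GenTraversableOnce, TraversableOnce}

implicitly[Traversable[Int] <:< TraversableOnce[Int]]
implicitly[TraversableOnce[Int] <:< GenTraversableOnce[Int]]
implicitly[Iterator[Int] <:< TraversableOnce[Int]]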
I ran a right fold (:\) on a List of Strings, which caused a stack overflow. But when I changed it to use a left fold (/:), it worked fine. Why?
Since the source is open, you can actually see the implementations in LinearSeqOptimized.scala:
override /*TraversableLike*/
def foldLeft[B](z: B)(f: (B, A) => B): B = {
  var acc = z
  var these = this
  while (!these.isEmpty) {
    acc = f(acc, these.head)
    these = these.tail
  }
  acc
}

override /*IterableLike*/
def foldRight[B](z: B)(f: (A, B) => B): B =
  if (this.isEmpty) z
  else f(head, tail.foldRight(z)(f))
What you will notice is that foldLeft uses a while-loop while foldRight uses recursion. The loop strategy is very efficient but recursion requires pushing a frame on the stack for every element in the list (as it traverses to the end). This is why foldLeft works fine but foldRight causes a stack overflow.
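You can observe this yourself; on Scala 2.12 and earlier (where List inherits the recursive foldRight shown above; 2.13 reimplemented List.foldRight to avoid the deep recursion), the following blows the stack:

val big = List.fill(1000000)(1)
big.foldLeft(0)(_ + _)  // 1000000: runs in constant stack space
big.foldRight(0)(_ + _) // StackOverflowError: one stack frame per element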
Folds are a general family of commonly used functions which traverse recursive data structures and typically result in a single value (or reference). On sequences and lists, foldLeft (in a general sense) is tail-recursive and as such can be optimized. foldRight is not tail-recursive and thus cannot be tail-call optimized. It does, however, potentially have the benefit of being applicable to infinite series.
The implementations of foldLeft and foldRight from the Scala libraries (pirated from #dhg's answer) reflect this optimization, or lack thereof: foldLeft has been manually tail-call optimized using a while loop; foldRight cannot be.
override /*TraversableLike*/
def foldLeft[B](z: B)(f: (B, A) => B): B = {
  var acc = z
  var these = this
  while (!these.isEmpty) {
    acc = f(acc, these.head)
    these = these.tail
  }
  acc
}

override /*IterableLike*/
def foldRight[B](z: B)(f: (A, B) => B): B =
  if (this.isEmpty) z
  else f(head, tail.foldRight(z)(f))
I believe there is a section on folds in Programming in Scala, Second Edition by Odersky, Spoon, and Venners which describes how foldLeft on Lists is tail-recursive, while foldRight may be applicable to infinite lists. Unfortunately, I do not have the book on me at the moment to provide page numbers. Either way, it isn't very difficult to prove this yourself.
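If you do need a right fold over a long List, the standard workaround is to fold left over the reversed list (a sketch with a hypothetical name):

def foldRightViaLeft[A, B](xs: List[A])(z: B)(f: (A, B) => B): B =
  xs.reverse.foldLeft(z)((acc, a) => f(a, acc)) // stack-safe: reverse plus the while-loop foldLeft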
See also the section of folds in Learn You a Haskell for Great Good by Miran Lipovača
Back when we were dealing with recursion, we noticed a theme
throughout many of the recursive functions that operated on lists.
Usually, we'd have an edge case for the empty list. We'd introduce the
x:xs pattern and then we'd do some action that involves a single
element and the rest of the list. It turns out this is a very common
pattern, so a couple of very useful functions were introduced to
encapsulate it. These functions are called folds.
I have learned the basic difference between foldLeft and reduceLeft
foldLeft:
initial value has to be passed
reduceLeft:
takes first element of the collection as initial value
throws exception if collection is empty
Is there any other difference?
Any specific reason to have two methods with similar functionality?
A few things to mention here before giving the actual answer:
Your question doesn't have anything to do with left; it's rather about the difference between reducing and folding.
The difference is not the implementation at all; just look at the signatures.
The question doesn't have anything to do with Scala in particular; it's rather about two concepts of functional programming.
Back to your question:
Here is the signature of foldLeft (could also have been foldRight for the point I'm going to make):
def foldLeft [B] (z: B)(f: (B, A) => B): B
And here is the signature of reduceLeft (again, the direction doesn't matter here):
def reduceLeft [B >: A] (f: (B, A) => B): B
These two look very similar and thus caused the confusion. reduceLeft is a special case of foldLeft (which by the way means that you sometimes can express the same thing by using either of them).
When you call reduceLeft say on a List[Int] it will literally reduce the whole list of integers into a single value, which is going to be of type Int (or a supertype of Int, hence [B >: A]).
When you call foldLeft say on a List[Int] it will fold the whole list (imagine rolling a piece of paper) into a single value, but this value doesn't have to be even related to Int (hence [B]).
Here is an example:
def listWithSum(numbers: List[Int]) = numbers.foldLeft((List.empty[Int], 0)) {
  (resultingTuple, currentInteger) =>
    (currentInteger :: resultingTuple._1, currentInteger + resultingTuple._2)
}
This method takes a List[Int] and returns a Tuple2[List[Int], Int] or (List[Int], Int). It calculates the sum and returns a tuple with a list of integers and its sum. By the way, the list is returned backwards, because we used foldLeft instead of foldRight.
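For instance, calling it on a small list:

listWithSum(List(1, 2, 3)) // (List(3, 2, 1), 6): the reversed list and its sum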
Watch One Fold to rule them all for a more in depth explanation.
reduceLeft is just a convenience method. It is equivalent to
list.tail.foldLeft(list.head)(op)
where op is the function you pass to reduceLeft.
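Spelled out as a sketch (hypothetical name, assumes a non-empty list):

def myReduceLeft[A](list: List[A])(op: (A, A) => A): A =
  list.tail.foldLeft(list.head)(op)

myReduceLeft(List(1, 3, 5))(_ + _) // 9, same as List(1, 3, 5).reduceLeft(_ + _)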
foldLeft is more generic, you can use it to produce something completely different than what you originally put in. Whereas reduceLeft can only produce an end result of the same type or super type of the collection type. For example:
List(1,3,5).foldLeft(0) { _ + _ }
List(1,3,5).foldLeft(List[String]()) { (a, b) => b.toString :: a }
foldLeft applies the closure to the previous folded result (the initial value on the first step) and the next value.
reduceLeft, on the other hand, first applies the closure to the first two values of the list, and then combines each remaining value with the cumulative result. See:
List(1,3,5).reduceLeft { (a, b) => println("a " + a + ", b " + b); a + b }
If the list is empty foldLeft can present the initial value as a legal result. reduceLeft on the other hand does not have a legal value if it can't find at least one value in the list.
For reference, reduceLeft fails when applied to an empty container with the following error:
java.lang.UnsupportedOperationException: empty.reduceLeft
Reworking the code to use
myList.foldLeft("") { (a, b) => a + b }
is one potential option. Another is to use the reduceLeftOption variant, which returns an Option-wrapped result.
myList.reduceLeftOption { (a, b) => a + b } match {
  case None    => // handle no result as necessary
  case Some(v) => println(v)
}
The basic reason they are both in Scala standard library is probably because they are both in Haskell standard library (called foldl and foldl1). If reduceLeft wasn't, it would quite often be defined as a convenience method in different projects.
From Functional Programming Principles in Scala (Martin Odersky):
The function reduceLeft is defined in terms of a more general function, foldLeft.
foldLeft is like reduceLeft but takes an accumulator z, as an additional parameter, which is returned when foldLeft is called on an empty list:
(List(x1, ..., xn) foldLeft z)(op) = (...(z op x1) op ...) op xn
[as opposed to reduceLeft, which throws an exception when called on an empty list.]
The course (see lecture 5.5) provides abstract definitions of these functions, which illustrates their differences, although they are very similar in their use of pattern matching and recursion.
abstract class List[T] { ...
  def reduceLeft(op: (T, T) => T): T = this match {
    case Nil     => throw new Error("Nil.reduceLeft")
    case x :: xs => (xs foldLeft x)(op)
  }
  def foldLeft[U](z: U)(op: (U, T) => U): U = this match {
    case Nil     => z
    case x :: xs => (xs foldLeft op(z, x))(op)
  }
}
Note that foldLeft returns a value of type U, which is not necessarily the same as the list's element type T, whereas reduceLeft returns a value of the element type T.
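A quick check of that difference in result types:

List(1, 2, 3).reduceLeft(_ + _)   // 6: an Int, same type as the elements
List(1, 2, 3).foldLeft("")(_ + _) // "123": a String, unrelated to the element type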
To really understand what you are doing with fold/reduce, check this: http://wiki.tcl.tk/17983 (a very good explanation). Once you get the concept of fold, reduce will come together with the answer above:
list.tail.foldLeft(list.head)(op)
Scala 2.13.3 demo:
val names = List("Foo", "Bar")
println("ReduceLeft: "+ names.reduceLeft(_+_))
println("ReduceRight: "+ names.reduceRight(_+_))
println("Fold: "+ names.fold("Other")(_+_))
println("FoldLeft: "+ names.foldLeft("Other")(_+_))
println("FoldRight: "+ names.foldRight("Other")(_+_))
outputs:
ReduceLeft: FooBar
ReduceRight: FooBar
Fold: OtherFooBar
FoldLeft: OtherFooBar
FoldRight: FooBarOther
I have two functions (note: these have been edited since the original; some of the answers below are responding to the original versions, which returned a sequence of ()):
def foo1[A](ls: Iterable[A]): Iterator[A] =
  for (List(a, b) <- ls sliding 2) yield a

def foo2[A](ls: Iterable[A]): Iterator[A] =
  for (a :: b :: Nil <- ls sliding 2) yield a
which I naively thought were the same. But Scala gives this warning only for the first one:
warning: non variable type-argument A in type pattern List[A]
is unchecked since it is eliminated by erasure
I think I understand why it gives that warning for the first one: Scala thinks that I'm trying to use the type as a condition on the pattern, i.e., a match against List[B](_, _) should fail if B does not inherit from A, except that this check can't happen at runtime because the type is erased in both cases.
So two questions:
1) Why does the second one not give the same warning?
2) Is it possible to convince Scala that the type is actually known at compile time, and thus can't possibly fail to match?
Edit: I think this answers my first question. But I'm still curious about the second one.
Edit: agilesteel mentioned in a comment that
for (List(a, b) <- List(1,2,3,4) sliding 2) yield ()
produces no warning. How is that different from foo1 (shouldn't the [Int] parameter be erased just the same as the [A] parameter is)?
I'm not sure what is happening here, but the static type of Iterable[A].sliding is Iterator[Iterable[A]], not Iterator[List[A]] which would be the static type of List[A].sliding.
You can try receiving Seq instead of Iterable, and that works too. EDIT Contrary to what I previously claimed, both Iterable and Seq are co-variant, so I don't know what's different. END EDIT The definition of sliding is pretty weird too:
def sliding [B >: A] (size: Int): Iterator[Iterable[A]]
See how it requires a B, a superclass of A, that never gets used? Contrast that with Iterator.sliding, for which there's no problem:
def sliding [B >: A] (size: Int, step: Int = 1): GroupedIterator[B]
Anyway, on to the second case:
for (a::b::Nil <- ls sliding 2) yield a
Here you are decomposing the list twice, and for each decomposition the type of head is checked against A. Since the type of head is not erased, you don't have a problem. This is also mostly a guess.
Finally, if you turn ls into a List, you won't have a problem. Short of that, I don't think there's anything you can do. Otherwise, you can also write this:
def foo1[A](ls: Iterable[A]): Iterator[A] =
  for (Seq(a, b) <- ls.iterator sliding 2) yield a
1) The second one does not produce a warning probably because you are constructing the list (or the pattern) by prepending elements to the Nil object, which extends List[Nothing]. And since everything is Nothing, there is nothing to be worried about ;) But I'm not sure, really guessing here.
2) Why don't you just use:
def foo[A](ls: Iterable[A]) =
  for (list <- ls sliding 2) yield ()