Can I mutate a variable in place in a purely functional way? - scala

I know I can use state passing and state monads for purely functional mutation, but afaik that's not in-place and I want the performance benefits of doing it in-place.
An example would be great, e.g. adding 1 to a number, preferably in Idris but Scala will also be good
p.s. is there a tag for mutation? can't see one

No, this is not possible in Scala.
It is however possible to achieve the performance benefits of in-place mutation in a purely functional language. For instance, let's take a function that updates an array in a purely functional way:
def update(arr: Array[Int], idx: Int, value: Int): Array[Int] =
  arr.take(idx) ++ Array(value) ++ arr.drop(idx + 1)
We need to copy the array here in order to maintain purity. The reason is that if we mutated it in place, we'd be able to observe that after calling the function:
def update(arr: Array[Int], idx: Int, value: Int): Array[Int] = {
  arr(idx) = value
  arr
}
The following code will work fine with the first implementation but break with the second:
val arr = Array(1, 2, 3)
assert(arr(1) == 2)
val arr2 = update(arr, 1, 42)
assert(arr2(1) == 42) // so far, so good…
assert(arr(1) == 2) // oh noes!
The solution in a purely functional language is to simply forbid the last assert. If you can't observe the fact that the original array was mutated, then there's nothing wrong with updating the array in place! The means to achieve this is called linear types. Linear values are values that you can use exactly once. Once you've passed a linear value to a function, the compiler will not allow you to use it again, which fixes the problem.
There are two languages I know of that have this feature: ATS and Haskell. If you want more details, I'd recommend this talk by Simon Peyton-Jones where he explains the implementation in Haskell:
https://youtu.be/t0mhvd3-60Y
Support for linear types has since been merged into GHC: https://www.tweag.io/blog/2020-06-19-linear-types-merged/

Related

What's the difference between a set and a mapping to boolean?

In Scala, I sometimes use a Map[A, Boolean], sometimes a Set[A]. There's really not much difference between these two concepts, and an implementation might use the same data structure to implement them. So why bother to have Sets? As I said, this question occurred to me in connection with Scala, but it would arise in any programming language whose library implements a Set abstraction.
There are some specific convenience methods defined on Set (intersect, diff and more). Not a big deal, but often useful.
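For illustration, a quick sketch of those operations (the printed ordering may differ, since sets are unordered):
val a = Set(1, 2, 3)
val b = Set(2, 3, 4)
a union b              // Set(1, 2, 3, 4)
a intersect b          // Set(2, 3)
a diff b               // Set(1)
Set(2, 3).subsetOf(a)  // true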
My first thoughts are two:
efficiency: if you only want to signal presence, why bother with a flag that can be either true or false?
meaning: a set is about the existence of something, while a map is about a correlation between a key and a value (generally speaking); these two ideas are quite different and should be used accordingly, to simplify reading and understanding the code
Furthermore, the semantics of application change:
val map: Map[String, Boolean] = Map("hello" -> true, "world" -> false)
val set: Set[String] = Set("hello")
map("hello") // true
map("world") // false
map("none!") // EXCEPTION
set("hello") // true
set("world") // false
set("none!") // false
And the set gives you that false for absent elements without having to actually store an extra pair to indicate absence (not to mention the boolean that would carry that information).
Sets are very good to indicate the presence of something, which makes them very good for filtering:
val map = (0 until 10).map(_.toString).zipWithIndex.toMap
val set = (3 to 5).map(_.toString).toSet
map.filterKeys(set) // keeps the pairs whose keys are "3", "4" and "5"
Maps, in terms of processing collections, are good to indicate relationships, which makes them very good for collecting:
set.collect(map) // more or less equivalent to the above, but only the values are returned
You can read more about using collections as functions to process other collections here.
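For example, here is a small sketch of that idea (the values are made up for illustration): a Set[A] can stand in for an A => Boolean, and a Map[A, B] for a partial function from A to B:
val vowels: Set[Char] = Set('a', 'e', 'i', 'o', 'u')
"functional".filter(vowels)   // "uioa": the Set acts as Char => Boolean

val codes: Map[Char, Int] = Map('a' -> 1, 'b' -> 2)
"ab".collect(codes)           // IndexedSeq(1, 2): the Map acts as a partial function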
There are several reasons:
1) It is easier to think about and work with a data structure that only holds single elements, as opposed to one mapping each element to a dummy true.
For example, it is easier to convert a list to a Set than to a Map:
scala> val myList = List(1,2,3,2,1)
myList: List[Int] = List(1, 2, 3, 2, 1)
scala> myList.toSet
res9: scala.collection.immutable.Set[Int] = Set(1, 2, 3)
scala> myList.map(x => (x, true)).toMap
res1: scala.collection.immutable.Map[Int,Boolean] = Map(1 -> true, 2 -> true, 3 -> true)
2) As Kombajn zbożowy pointed out, Sets have additional helper methods, union, intersect, diff, subsetOf.
3) Since a Set does not store the mapping to a dummy value, a set is smaller in memory; this is more noticeable for small-sized keys.
Having said the above, not all languages have a Set data structure; Go, for example, does not.

Selection Sort - Functional Style with recursion

I have only recently started learning Scala and am trying to delve into functional programming. I have seen many of the posts on selection sort in a functional style, but have not totally been able to understand all the solutions that have been given. My Scala skills are still nascent.
I have written a piece of Scala code using tail recursion and would appreciate any feedback on the style. Does it look like Functional Programming? Is there a way to make this better or make it more functional?
import scala.annotation.tailrec

object FuncSelectionSort {

  /**
   * Selection Sort - Trying Functional Style
   */
  def sort(a: Array[Int]) = {
    val b: Array[Int] = new Array[Int](a.size)
    Array.copy(a, 0, b, 0, a.size)

    // Function to swap elements
    def exchange(i: Int, j: Int): Unit = {
      val k = b(i)
      b(i) = b(j)
      b(j) = k
    }

    @tailrec
    def helper(b: Array[Int], n: Int): Array[Int] = {
      if (n == b.length - 1) b
      else {
        val head = b(n)
        val minimumInTail = b.slice(n, b.length).min
        if (head > minimumInTail) {
          val minimumInTailIndex = b.slice(n, b.length).indexOf(minimumInTail)
          exchange(n, minimumInTailIndex + n)
        }
        helper(b, n + 1)
      }
    }

    helper(b, 0)
  }
}
The logic that I have tried to adopt is fairly simple. I start with the first index of the Array and find the minimum from the rest. But instead of passing the Array.tail for the next recursion; I pass in the full array and check a slice, where each slice is one smaller than the previous recursion slice.
For example,
If Array(10, 4, 6, 9, 3, 5)
First pass -> head = 10, slice = 4,6,9,3,5
Second pass -> head = 4, slice = 6,9,3,5
I feel it looks the same as passing the tail, but I wanted to try and slice and see if it works the same way.
Appreciate your help.
For detailed feedback on working code, you'd be better off going to Code Review; however, I can say one thing: in-place sorting of arrays is per se not a good example of functional programming. This is because we purists don't like mutability, as it doesn't fit together well with recursion over data; in particular, your mixing of recursion and mutation is not really good style, I'd say (and it is hard to read).
One clean variant would be to copy the full original array, and use in-place selection sort implemented as normal imperative code (with loops and in-place swap). Encapsulated in a function, this is pure to the outside. This pattern is commonly used in the standard library; cf. List.scala.
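A minimal sketch of that first variant (the function name is my own): the up-front copy keeps it pure from the caller's point of view, while the sort itself is a plain imperative selection sort.
// Pure on the outside: copy the input, then sort the copy in place.
def sortedCopy(a: Array[Int]): Array[Int] = {
  val b = a.clone()
  var i = 0
  while (i < b.length - 1) {
    // find the index of the minimum in b(i..end)
    var minIdx = i
    var j = i + 1
    while (j < b.length) {
      if (b(j) < b(minIdx)) minIdx = j
      j += 1
    }
    // swap it into position i
    if (minIdx != i) {
      val tmp = b(i); b(i) = b(minIdx); b(minIdx) = tmp
    }
    i += 1
  }
  b
}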
The other variant, and probably more instructive for learning immutable programming, is to use an immutable recursive algorithm over linked lists:
def sorted(a: List[Int]): List[Int] = a match {
  case Nil => Nil
  case xs  => xs.min :: sorted(xs.diff(List(xs.min)))
}
From that style of programming, you'll learn much more about functional thinking (leaving aside efficiency though). Exercise: transform that code into tail-recursion.
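One possible solution to that exercise, as a sketch (the helper name is mine): accumulate the sorted prefix and reverse it at the end.
import scala.annotation.tailrec

def sortedTR(a: List[Int]): List[Int] = {
  @tailrec
  def loop(rest: List[Int], acc: List[Int]): List[Int] = rest match {
    case Nil => acc.reverse
    case xs  =>
      val m = xs.min
      loop(xs.diff(List(m)), m :: acc)  // move the minimum onto the accumulator
  }
  loop(a, Nil)
}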
(And actually, insertion sort works nicer with this pattern, since you don't have to "remove" at every step, but can build up a sorted linked list; you might try to implement that, too).

Why does Scala Immutable Vector not provide an insertAt method?

Scala's immutable Vector is implemented using a Relaxed Radix Balanced Tree, which provides single-element append in O(log n) like a HAMT, but also O(log n) insertAt and concatenation.
Why does the API not expose insertAt?
You can create a custom insertAt method (neglecting performance issues) operating on immutable vectors. Here is just a rough method sketch:
def insertAt[T](v: Vector[T], elem: T, pos: Int): Vector[T] = {
  val n = v.size
  val front = v.take(pos)
  val end = v.takeRight(n - pos)
  front ++ Vector(elem) ++ end
}
Call:
val x = Vector(1,2,3,5)
println( insertAt( x, 7, 0) )
println( insertAt( x, 7, 1) )
println( insertAt( x, 7, 2) )
Output:
Vector(7, 1, 2, 3, 5)
Vector(1, 7, 2, 3, 5)
Vector(1, 2, 7, 3, 5)
Not handled properly in this sketch: types and index checking.
Use the pimp-my-library pattern to add that to the Vector class.
Edit: Updated version of insertAt
def insertAt[T](v: Vector[T], elem: T, pos: Int): Vector[T] =
  v.take(pos) ++ Vector(elem) ++ v.drop(pos)
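A sketch of the pimp-my-library approach mentioned above (the object and class names are mine); with this in scope, insertAt reads like a regular Vector method:
object VectorSyntax {
  // extension method added to Vector via an implicit value class
  implicit class VectorInsertAt[T](private val v: Vector[T]) extends AnyVal {
    def insertAt(elem: T, pos: Int): Vector[T] =
      v.take(pos) ++ Vector(elem) ++ v.drop(pos)
  }
}

import VectorSyntax._
Vector(1, 2, 3, 5).insertAt(7, 2)  // Vector(1, 2, 7, 3, 5)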
An efficient insertAt is typically not an operation I would expect from a general Vector, immutable or not. That's more the purview of (mutable) linked lists.
Putting an efficient insertAt into the public API of Vector would severely constrain the implementation choices for that API. While at the moment there is only one implementation of the Scala standard library APIs (which I personally find rather unfortunate; a bit of competition wouldn't hurt, see C++, C, Java, Ruby and Python for examples of how multiple implementations can foster an environment of friendly coopetition), there is no way to know that this will forever be the case. So you should be very careful what guarantees you add to the public API of the Scala standard library; otherwise you might constrain both future versions of the current single implementation and potential alternative implementations in undue ways.
Again, see Ruby for an example, where exposing implementation details of one implementation in the API has led to severe pains for other implementors.

How to remove duplicates from collection (without creating new ones in-between)?

So first up, I'm fully aware mutation is a bad idea, but I need to keep object creation down to a minimum as I have an incredibly huge amount of data to process (keeps GC hang time down and speeds up my code).
What I want is a scala collection that has a method like distinct or similar, or possibly a library or code snippet (but native scala preferred) such that the method is side effecting / mutating the collection rather than creating a new collection.
I've explored the usual suspects like ArrayBuffer, mutable.List, Array, MutableList, Vector and they all "create a new sequence" from the original rather than mutate the original in place. Am I trying to find something that does not exist? Will I just have to write my own??
I think this exists in C++ http://www.cplusplus.com/reference/algorithm/unique/
Also, mega mega bonus points if there is some kind of awesome tail recursive way of doing this so that any bookkeeping structures created are kept in a single stack frame that is thus deallocated from memory once the method exits. The reason this would be uber cool is then even if the method creates some instances of things in order to perform the removal of duplicates, those instance will not need to be garbage collected and therefore not contribute to massive GC hangs. It doesn't have to be recursion, as long as it's likely to cause the objects to go on the stack (see escape analysis here http://www.ibm.com/developerworks/java/library/j-jtp09275/index.html)
(Also if I can specify and fix the capacity (size in memory) of the collection that would also be great)
The algorithm you mentioned (from C++) removes consecutive duplicates. So if consecutive removal is what you need, you could use some LinkedList, but Scala's mutable linked lists have been deprecated. On the other hand, if you want something memory-efficient and can accept linear access, you could wrap your collection (mutable or immutable) with a distinct iterator (O(N)):
def toConsDist[T](c: Traversable[T]) = new Iterator[T] {
  val i = c.toIterator
  var prev: Option[T] = None
  var _nxt: Option[T] = None

  def nxt = {
    if (_nxt.isEmpty) _nxt = i.find(x => !prev.toList.contains(x))
    prev = _nxt
    _nxt
  }

  def hasNext = nxt.nonEmpty

  def next = {
    val next = nxt.get
    _nxt = None
    next
  }
}
scala> toConsDist(List(1,1,1,2,2,3,3,3,2,2)).toList
res44: List[Int] = List(1, 2, 3, 2)
If you need to remove all duplicates, it will be O(N*N), but you can't use Scala collections for that because of https://github.com/scala/scala/commit/3cc99d7b4aa43b1b06cc837a55665896993235fc (see the LinkedList part) and https://stackoverflow.com/a/27645224/1809978.
But you may use Java's LinkedList:
import scala.collection.JavaConverters._
scala> val mlist = new java.util.LinkedList[Integer]
mlist: java.util.LinkedList[Integer] = []
scala> mlist.asScala ++= List(1,1,1,2,2,3,3,3,2,2)
res74: scala.collection.mutable.Buffer[Integer] = Buffer(1, 1, 1, 2, 2, 3, 3, 3, 2, 2)
scala> var i = 0
i: Int = 0
scala> for(x <- mlist.asScala){ if (mlist.indexOf(x) != i) mlist.set(i, null); i+=1} //O(N*N)
scala> while(mlist.remove(null)){} //O(N*N)
scala> mlist
res77: java.util.LinkedList[Integer] = [1, 2, 3]
mlist.asScala just creates a wrapper without any copying. You can't structurally modify Java's LinkedList while iterating over it with the Scala wrapper, which is why I used nulls. You may try Java's ConcurrentLinkedQueue, but it doesn't support indexOf, so you would have to implement it yourself (Scala maps it to an Iterator, so asScala.indexOf won't work).
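As a side note, one way to avoid the null placeholders is to go through the Java iterator itself, whose remove() is allowed during iteration. A sketch of mine (keeping first occurrences, at the cost of an auxiliary HashSet):
import java.util.{LinkedList => JLinkedList}
import scala.collection.mutable

val jlist = new JLinkedList[Integer]()
List(1, 1, 1, 2, 2, 3, 3, 3, 2, 2).foreach(x => jlist.add(x))

val seen = mutable.HashSet.empty[Integer]
val it = jlist.iterator()
while (it.hasNext) {
  if (!seen.add(it.next())) it.remove()  // remove the element just returned if already seen
}
// jlist is now [1, 2, 3], deduplicated in place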
By definition, immutability forces you to create new objects whenever you want to change your collection.
What Scala provides for some collections are buffers and builders, which allow you to build a collection through a mutable interface and finally return an immutable version. But once you have your immutable collection, you can't change its references in any way, and that includes filtering operations such as distinct. The furthest you can go concerning mutability in an immutable collection is changing the state of its elements, when these are mutable objects.
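A small sketch of that builder idea (the names are mine): accumulate through a mutable interface, then hand out an immutable result once at the end.
import scala.collection.mutable

def distinctToVector[A](xs: Iterable[A]): Vector[A] = {
  val seen = mutable.HashSet.empty[A]
  val out  = Vector.newBuilder[A]
  for (x <- xs) if (seen.add(x)) out += x  // keep only the first occurrence of each element
  out.result()
}

distinctToVector(List(1, 1, 1, 2, 2, 3, 3, 3, 2, 2))  // Vector(1, 2, 3)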
On the other hand, some collections such as Vector are implemented as trees (in this case a trie), and insert or delete operations are implemented not by copying the entire tree but only the required branches.
From Martin Odersky's Programming in Scala:
Updating an element in the middle of a vector can be done by copying the node that contains the element, and every node that points to it, starting from the root of the tree. This means that a functional update creates between one and five nodes that each contain up to 32 elements or subtrees. This is certainly more expensive than an in-place update in a mutable array, but still a lot cheaper than copying the whole vector.
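A quick way to see that behavior (a sketch): updating one slot of a Vector leaves the original untouched while sharing most of the structure.
val v1 = Vector.tabulate(100)(identity)  // Vector(0, 1, ..., 99)
val v2 = v1.updated(50, -1)              // copies only the path to index 50

v1(50)  // 50, the original is unchanged
v2(50)  // -1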

The easiest way to write {1, 2, 4, 8, 16 } in Scala

I was advertising Scala to a friend (who uses Java most of the time) and he asked me a challenge: what's the way to write an array {1, 2, 4, 8, 16} in Scala.
I don't know functional programming that well, but I really like Scala. However, this is an iterative array formed by (n*(n-1)), so how do I keep track of the previous step? Is there a way to do it easily in Scala, or do I have to write more than one line of code to achieve this?
Array.iterate(1, 5)(2 * _)
or
Array.iterate(1, 5)(n => 2 * n)
Elaborating on this as asked for in a comment. I don't know exactly what you want me to elaborate on; I hope you will find what you need.
This is the function iterate(start, len)(f) on the Array object (scaladoc). That would be a static method in Java.
The point is to fill an array of len elements, starting from the value start and always computing the next element by passing the previous one to the function f.
A basic implementation would be
import scala.reflect.ClassTag

def iterate[A: ClassTag](start: A, len: Int)(f: A => A): Array[A] = {
  val result = new Array[A](len)
  if (len > 0) {
    var current = start
    result(0) = current
    for (i <- 1 until len) {
      current = f(current)
      result(i) = current
    }
  }
  result
}
(The actual implementation, which is not much different, can be found here. It differs a little, mostly because the same code is used for different data structures, e.g. List.iterate.)
Besides that, the implementation is very straightforward. The syntax may need some explanation:
def iterate[A](...): Array[A] makes it a generic method, usable for any type A. That would be public <A> A[] iterate(...) in Java.
ClassTag is just a technicality: in Scala, as in Java, you normally cannot create an array of a generic type (Java's new E[]), and the : ClassTag asks the compiler to add some magic. It is very similar to adding, at the method declaration, and passing, at the call site, a Class<A> clazz parameter in Java, which can then be used to create the array by reflection. If you use e.g. List.iterate rather than Array.iterate, it is not needed.
Maybe more surprising are the two parameter lists: one with start and len, and then, in separate parentheses, the one with f. Scala allows a method to have several parameter lists. Here the reason is the peculiar way Scala does type inference: looking at the first parameter list, it determines what A is, based on the type of start. Only afterwards does it look at the second list, and by then it knows what type A is. Otherwise, it would need to be told; so if there had been only one parameter list, def iterate[A: ClassTag](start: A, len: Int, f: A => A),
then the call would have to be one of
Array.iterate(1, 5, (n: Int) => 2 * n)
Array.iterate[Int](1, 5, n => 2 * n)
Array.iterate(1, 5, 2 * (_: Int))
Array.iterate[Int](1, 5, 2 * _)
making Int explicit one way or another. So it is common in Scala to put a function argument in a separate parameter list; the type might be much longer to write than just Int.
A => A is just syntactic sugar for type Function1[A,A]. Obviously a functional language has functions as (first class) values, and a typed functional language has types for functions.
In the call iterate(1, 5)(n => 2 * n), n => 2 * n is the function value. A more complete declaration would be {n: Int => 2 * n}, but one may dispense with Int for the reason stated above. Scala syntax is rather flexible: one may use either parentheses or curly braces around the function argument, so it could also be iterate(1, 5){n => 2 * n}. The curlies allow a full block with several instructions, which is not needed here.
As for immutability: Array is fundamentally mutable, and there is no way to put a value into an array except by changing the array at some point. My implementation (and the one in the library) also uses a mutable var (current) and a side-effecting for loop, which is not strictly necessary; a (tail-)recursive implementation would be only a little longer to write and just as efficient. But a mutable local does not hurt much, and we are already dealing with a mutable array anyway.
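For completeness, a sketch of that (tail-)recursive variant (the name iterateRec is mine); it still fills a mutable array, but without the var and the for loop:
import scala.annotation.tailrec
import scala.reflect.ClassTag

def iterateRec[A: ClassTag](start: A, len: Int)(f: A => A): Array[A] = {
  val result = new Array[A](len)
  @tailrec
  def loop(i: Int, current: A): Unit =
    if (i < len) {
      result(i) = current      // write the current value
      loop(i + 1, f(current))  // tail call with the next value
    }
  loop(0, start)
  result
}

iterateRec(1, 5)(2 * _)  // Array(1, 2, 4, 8, 16)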
There's always more than one way to do it in Scala:
scala> (0 until 5).map(1<<_).toArray
res48: Array[Int] = Array(1, 2, 4, 8, 16)
or
scala> (for (i <- 0 to 4) yield 1<<i).toArray
res49: Array[Int] = Array(1, 2, 4, 8, 16)
or even
scala> List.fill(4)(1).scanLeft(1)(2*_+0*_).toArray
res61: Array[Int] = Array(1, 2, 4, 8, 16)
The other answers are fine if you happen to know in advance how many entries will be in the resulting list. But if you want to take all of the entries up to some limit, you should create an Iterator, use takeWhile to get the prefix you want, and create an array from that, like so:
scala> Iterator.iterate(1)(2*_).takeWhile(_<=16).toArray
res21: Array[Int] = Array(1, 2, 4, 8, 16)
It all boils down to whether what you really want is more correctly stated as
the first 5 powers of 2 starting at 1, or
the powers of 2 from 1 to 16
For non-trivial functions you almost always want to specify the end condition and let the program figure out how many entries there are. Of course your example was simple, and in fact the real easiest way to create that simple array is just to write it out literally:
scala> Array(1,2,4,8,16)
res22: Array[Int] = Array(1, 2, 4, 8, 16)
But presumably you were asking for a general technique you could use for arbitrarily complex problems. For that, Iterator and takeWhile are generally the tools you need.
You don't have to keep track of the previous step. Also, each element is not formed by n * (n - 1). You probably meant f(n) = f(n - 1) * 2.
Anyway, to answer your question, here's how you do it:
(0 until 5).map(math.pow(2, _).toInt).toArray