Structural Sharing List in Scala

I have a question about structural sharing of List in Scala. I have read this sentence somewhere on the Internet:
List implements structural sharing of the tail list. This means that many operations are either zero- or constant-memory cost.
but I don't really understand how the time and memory cost of list operations would be reduced. For example:
val mainList = List(3, 2, 1)
val with4 = 4 :: mainList // O(1)
if we want to create another list with4, the time would just be O(1) and the memory cost a single cell. But for other operations on the list, how would it be different? I mean, with length() or reverse(), wouldn't it still be O(n) as usual? Can anyone please explain, ideally with an example? Thank you!

The operations on List that run in constant time (O(1)) due to structural sharing are prepend (::), head, and tail. Most other operations are linear time (O(n)).
Your example is correct: 4 :: myList is constant time, as are myList.head and myList.tail. Other things, like length and reverse, are linear time.
This is why List is not a particularly good collection to use in most cases unless you only use those operations, because everything else is O(n).
You can refer to http://docs.scala-lang.org/overviews/collections/performance-characteristics.html for an overview of the runtime characteristics of different collections.
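Structural sharing can be observed directly with reference equality: the new list's tail is the very same object as the old list, not a copy. A minimal sketch:

```scala
val mainList = List(3, 2, 1)
val with4 = 4 :: mainList         // O(1): allocates a single new :: cell

// The tail of with4 is not a copy; it is mainList itself
println(with4.tail eq mainList)   // true: the tail is shared by reference

// reverse, by contrast, must visit every element and build new cells: O(n)
println(with4.reverse)            // List(1, 2, 3, 4)
```

This is why prepend is cheap while length and reverse stay O(n): the former reuses the existing chain of cells, the latter must walk it.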

Related

Is amortized time complexity analysis broken for immutable collections?

Update: this question is about how we can use amortized analysis for immutable collections. Scala's immutable queue is just an example. How this immutable queue is implemented is clearly visible from the sources. And as was pointed out in the answers, Scala's sources do not mention amortized time for it at all. But guides and internet podcasts do. And as I saw, C# has a similar immutable queue with similar statements about amortized time.
Amortized analysis was originally invented for mutable collections. How we can apply it to Scala's mutable Queue is clear. But how can we apply it to this sequence of operations, for example?
val q0 = collection.immutable.Queue[Int]()
val q1 = q0.enqueue(1)
val h1 = q1.head
val q2 = q1.enqueue(2)
val h2 = q2.head
val (d2, q3) = q2.dequeue
val (d1, q4) = q3.dequeue
We have different immutable queues q0-q4 in sequence. May we consider them as one single queue or not? How can we use the O(1) enqueue operations to amortize both the heavy head and the first dequeue? What method of amortized analysis can we use for this sequence? I don't know. I cannot find anything in textbooks.
Final answer:
(Thanks to all who answered my question!)
In short: Yes, and no!
"No", because a data structure may be used as immutable but not persistent. A data structure is ephemeral if, on making a change, we forget (or destroy) all old versions of the data structure. Mutable data structures are an example. Dequeuing an immutable queue built from two strict lists can be called "amortized O(1)" in such ephemeral contexts. But full persistence, with forking of the immutable data structure's history, is desirable for many algorithms. For such algorithms the expensive O(N) operations of the immutable queue with two strict lists are not amortized O(1) operations. So the guide authors should add an asterisk and print in a 6pt footnote: * for specially selected sequences of operations.
In answers I was given an excellent reference: Amortization, Lazy Evaluation, and Persistence: Lists with Catenation via Lazy Linking:
We first review the basic concepts of lazy evaluation, amortization, and persistence. We next discuss why the traditional approach to amortization breaks down in a persistent setting. Then we outline our approach to amortization, which can often avoid such problems through judicious use of lazy evaluation.
We can create a fully persistent immutable queue with amortized O(1) operations. It must be specially designed and use lazy evaluation. Without such a framework, with lazy evaluation of parts of the structure and memoization of results, we cannot apply amortized analysis to fully persistent immutable data structures. (It is also possible to create a double-ended queue with all operations having worst-case constant time and without lazy evaluation, but my question was about amortized analysis.)
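To make "immutable queue with two strict lists" concrete, here is a minimal sketch (TwoListQueue is a hypothetical name for illustration; the real scala.collection.immutable.Queue is built on the same idea): enqueue conses onto the in list in O(1); dequeue pops from the out list in O(1), except when out is empty and the whole in list must be reversed — the O(n) pivot discussed above.

```scala
// Minimal sketch of a functional queue built from two strict lists.
// TwoListQueue is a hypothetical illustration, not the real library class.
final case class TwoListQueue[A](in: List[A], out: List[A]) {
  def enqueue(a: A): TwoListQueue[A] =
    copy(in = a :: in)                  // O(1): prepend to `in`

  // Assumes the queue is non-empty.
  def dequeue: (A, TwoListQueue[A]) = out match {
    case h :: t => (h, copy(out = t))   // O(1): pop from `out`
    case Nil =>                         // pivot: reverse `in`, costing O(n)
      val r = in.reverse
      (r.head, TwoListQueue(Nil, r.tail))
  }
}
```

Re-running dequeue on an old pre-pivot value repeats the O(n) reverse, which is exactly why the amortized bound fails under persistent (forking) use.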
Original question:
According to the original definition, amortized time complexity is a worst case averaged over allowed sequences of operations: "we average the running time per operation over a (worst-case) sequence of operations" (https://courses.cs.duke.edu/fall11/cps234/reading/Tarjan85_AmortizedComplexity.pdf). See also textbooks ("Introduction to Algorithms" by Cormen et al., for example).
Scala's collections library guide states that two collection.immutable.Queue methods (head and tail) have amortized O(1) time complexity: https://docs.scala-lang.org/overviews/collections-2.13/performance-characteristics.html This table does not mention the complexities of the enqueue and dequeue operations, but another, unofficial guide states O(1) time complexity for enqueue and amortized O(1) time complexity for dequeue: https://www.waitingforcode.com/scala-collections/collections-complexity-scala-immutable-collections/read
But what do those statements about amortized time complexity really mean? Intuitively, they allow us to make predictions for algorithms using the data structure, such as: any sequence of N amortized O(1) operations allowed by the data structure itself has no worse than O(N) complexity for the whole sequence. Unfortunately, this intuitive meaning is clearly broken for immutable collections. For example, the next function has time complexity O(n^2) for 2n amortized O(1) (according to the guides) operations:
def quadraticInTime(n: Int) = {
  var q = collection.immutable.Queue[Int]()
  for (i <- 1 to n) q = q.enqueue(i)
  List.fill(n)(q.head)
}
val tryIt = quadraticInTime(100000)
The second parameter of the List.fill method is a by-name parameter and is evaluated n times in sequence. We can also use q.dequeue._1 instead of q.head, of course, with the same result.
We can also read in "Programming in Scala" by M. Odersky et al.: "Assuming that head, tail, and enqueue operations appear with about the same frequency, the amortized complexity is hence constant for each operation. So functional queues are asymptotically just as efficient as mutable ones." This contradicts the worst-case amortized complexity property from the textbooks and is wrong for the quadraticInTime method above.
But if a data structure has O(1)-time cloning, we can break the assumptions of amortized analysis for it just by executing N worst-case "heavy" operations on N copies of the data structure in sequence. And generally speaking, any immutable collection has O(1)-time cloning.
Question: is there a good formal definition of amortized time complexity for operations on immutable structures? Clearly, to be useful, the definition must further limit the allowed sequences of operations.
Chris Okasaki described how to solve this problem with lazy evaluation in Amortization, Lazy Evaluation, and Persistence: Lists with Catenation via Lazy Linking from 1995. The main idea is that you can guarantee that some action is done only once by hiding it in a thunk and letting the language runtime manage evaluating it exactly once.
The docs for Queue give tighter conditions for which the asymptotic complexity holds:
Adding items to the queue always has cost O(1). Removing items has cost O(1), except in the case where a pivot is required, in which case, a cost of O(n) is incurred, where n is the number of elements in the queue. When this happens, n remove operations with O(1) cost are guaranteed. Removing an item is on average O(1).
Note that removing from an immutable queue implies that, when dequeueing, subsequent operations must be performed on the returned Queue. Not doing this also means that it's not actually being used as a queue:
import scala.collection.immutable.Queue

val q = Queue.empty[Int].enqueue(0).enqueue(1)
q.dequeue._1 // 0
q.dequeue._1 // 0: q itself is unchanged, so both calls dequeue the same element
In your code, you're not actually using the Queue as a queue. Addressing this:
def f(n: Int): Unit = {
  var q = Queue.empty[Int]
  (1 to n).foreach { i => q = q.enqueue(i) }
  List.fill(n) {
    val (head, nextQ) = q.dequeue
    q = nextQ
    head
  }
}
def time(block: => Unit): Unit = {
  val startNanos = System.nanoTime()
  block
  println(s"Block took ${System.nanoTime() - startNanos}ns")
}
scala> time(f(10000))
Block took 2483235ns
scala> time(f(20000))
Block took 5039420ns
Note also that if we do an approximately equal number of enqueue and head operations on the same scala.collection.immutable.Queue, the head operations are in fact constant time even without amortization:
import scala.util.Try

val n = 10000
val q = Queue.empty[Int]
(1 to n).foreach { i => q.enqueue(i) } // result of enqueue is discarded
List.fill(n)(Try(q.head))

Efficient way to build collections from other collections

In Scala, as in many other languages, it is possible to build collections using the elements contained in other collections.
For example, it is possible to heapify a list:
import scala.collection.mutable.PriorityQueue
val l = List(1,2,3,4)
With:
val pq = PriorityQueue(l:_*)
or:
val pq = PriorityQueue[Int]() ++ l
These are, from my point of view, two quite different approaches:
Use a variadic constructor and collection: _*, which, at the end of the day, dumps the collection into an intermediate array.
Build an empty target collection and use the ++ method to add all the source collection elements.
From an aesthetic point of view I prefer the first option, but I am worried about collection: _*. I understand from "Programming in Scala" that variadic functions are translated into functions receiving an array.
Is the second option, in general, a better solution in terms of efficiency?
The second one might be faster in some cases, but apparently when the original collection is a Seq (such as your List), Scala tries to avoid the array creation; see here.
But, realistically, it probably will not ever make a difference anyway unless you are dealing with huge collections in tight loops. These kinds of things aren't worth worrying about, so do whichever one you like; you can spare the milliseconds.
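A quick sketch of the two forms side by side (both heaps end up with the same contents; dequeueAll drains a mutable PriorityQueue in priority order):

```scala
import scala.collection.mutable.PriorityQueue

val l = List(1, 2, 3, 4)

val pq1 = PriorityQueue(l: _*)        // variadic form: elements pass through a Seq
val pq2 = PriorityQueue[Int]() ++ l   // empty heap, then bulk-add with ++

println(pq1.size == pq2.size)         // true: same four elements either way
```

Whichever form you pick, both are O(n log n) to heapify here, so the choice is mostly stylistic.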

Scala's toList function appears to be slow

I was under the impression that calling seq.toList on an immutable Seq would make a new list that shares structural state with the first one. We're finding that this can be really slow and I'm not sure why. It is just sharing structural state, correct? I can't see why it would make an O(n) copy of all the elements when it knows they'll never change.
A List in Scala is a particular data structure: instances of :: each containing a value, followed by Nil at the end of the chain.
If you call toList on a List, it takes O(1) time. If you call toList on anything else, it must be converted into a List, which involves O(n) object allocations (all the :: instances).
So you have to ask whether you really want a scala.collection.immutable.List. That's what toList gives you.
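This is easy to check with reference equality (a quick sketch):

```scala
val l = List(1, 2, 3)
println(l.toList eq l)   // true: toList on a List returns the same instance, O(1)

val v = Vector(1, 2, 3)
println(v.toList)        // List(1, 2, 3): a fresh chain of :: cells, built in O(n)
```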
Sharing structural state is possible for particular operations on particular data structures.
With the List data structure in Scala, my understanding is that every element refers to the next, starting from the head through the tail, so a singly linked list.
From a structural state sharing perspective, consider the restrictions placed on this from the internal data structure perspective. Adding an element to the head of a List (X) effectively creates a new list (X') with the new element as the head of X' and the old list (X) as the tail. For this particular operation, internal state can be shared completely.
The same operation above can be applied to create a new List (X'), with the new element as the head of X' and any element from X as the tail, as long as you accept that the tail will be the element you choose from X, plus all additional elements it already has in its data structure.
When you think about it logically, each data structure has an internal structure that allows some operations to be performed with simple shared internal structure and other operations requiring more invasive and costly computations.
The key from my perspective here is having an understanding of the constraints placed on the operations by the internal data structure itself.
For example, consider the same operations above on a doubly linked list data structure and you will see that there are quite different restrictions.
Personally, I find drawing out an understanding of the internal structure can be helpful in understanding the consequences of particular operations.
In the case of the toList operation on an arbitrary sequence, with no knowledge of that sequence's internal data structure, one therefore has to assume O(n). List.toList has the obvious performance advantage of already being a list.

Streams vs. tail recursion for iterative processes

This is a follow-up to my previous question.
I understand that we can use streams to generate an approximation of pi (and other numbers), the n-th Fibonacci number, etc. However, I doubt that streams are the right approach to do that.
The main drawback (as I see it) is memory consumption: e.g. the stream will retain all Fibonacci numbers for i < n while I need only the n-th. Of course, I can use drop, but it makes the solution a bit more complicated. Tail recursion looks like a more suitable approach to tasks like that.
What do you think?
If you need to go fast, travel light. That means: avoid allocation of any unnecessary memory. If you need memory, use the fastest collections available. If you know how much memory you need, preallocate. Allocation is the absolute performance killer... for calculation. Your code may not look nice anymore, but it will go fast.
However, if you're working with IO (disk, network) or any user interaction, then allocation pales. It's then better to shift priority from code performance to maintainability.
Use Iterator. It does not retain intermediate values.
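For example, the n-th Fibonacci number can be computed in constant space with Iterator.iterate, where only the current pair of values is ever live (a sketch):

```scala
// n-th Fibonacci in constant space: the iterator holds only the current (a, b) pair
def fib(n: Int): BigInt =
  Iterator
    .iterate((BigInt(0), BigInt(1))) { case (a, b) => (b, a + b) }
    .drop(n)
    .next()
    ._1

println(fib(10))   // 55
```

Unlike a Stream/LazyList held in a val, nothing upstream of the current element is retained, so no memoized prefix can pile up.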
If you want the n-th Fibonacci number and use a stream just as a temporary data structure (i.e. you do not hold references to previously computed elements of the stream), then your algorithm will run in constant space.
Previously computed elements of a Stream (which are not used anymore) will be garbage collected. And since they were allocated in the youngest generation and immediately collected, almost all allocations might stay in cache.
Update:
It seems that the current implementation of Stream is not as space-efficient as it could be, mainly because it inherits the implementation of the apply method from the LinearSeqOptimized trait, where it is defined as:
def apply(n: Int): A = {
  val rest = drop(n)
  if (n < 0 || rest.isEmpty) throw new IndexOutOfBoundsException("" + n)
  rest.head
}
A reference to the head of the stream is held here by this, and it prevents the stream from being GC'ed. So a combination of the drop and head methods (as in f.drop(100).head) may be better for situations where dropping intermediate results is feasible. (Thanks to Sebastien Bocq for explaining this stuff on scala-user.)

Asymptotic behaviour of Scala methods

Is there somewhere I can find out the expected time and space complexities of operations on collections like HashSet, TreeSet, List and so on?
Is one just expected to know these from the properties of the abstract-data-types themselves?
I know of Performance characteristics for Scala collections, but this only mentions some very basic operations. Perhaps the rest of the operations for these collections are built purely from a small base-set, but then, it seems I am just expected to know that they have implemented them in this way?
The guide for the other methods should be: just think about what an efficient implementation would look like.
Most other bulk-operations on collections (operations that process each element in the collection) are O(n), so they are not mentioned there. Examples are filter, map, foreach, indexOf, reverse, find ...
Methods returning iterators or streams like combinations and permutations are usually O(1).
Methods involving 2 collections are usually O(max(n, m)) or O(min(n, m)). These are zip, zipAll, sameElements, corresponds, ...
Methods union, diff, and intersect are O(n + m).
Sort variants are, naturally, O(n log n). groupBy is O(n log n) in the current implementation. indexOfSlice uses the KMP algorithm and is O(m + n), where m and n are the lengths of the two sequences.
Methods such as +:, :+ or patch are generally O(n) as well, unless you are dealing with a specific case of an immutable collection for which the operation in question is more efficient - for example, prepending an element on a functional List or appending an element to a Vector.
Methods toX are generally O(n), as they have to iterate all the elements and create a new collection. An exception is toStream which builds the collection lazily - so it's O(1). Also, whenever X is the type of the collection toX just returns this, being O(1).
Iterator implementations should have O(1) (amortized) next and hasNext operations. Iterator creation should be worst-case O(log n), but O(1) in most cases.
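Two of the special cases mentioned above, sketched: prepending to a List shares the old list as the tail, while Vector supports effectively constant-time append.

```scala
val xs = List(2, 3)
val ys = 1 :: xs    // O(1): one new cell, xs shared as the tail

val v = Vector(1, 2)
val v2 = v :+ 3     // effectively constant time for Vector

println(ys)         // List(1, 2, 3)
println(v2)         // Vector(1, 2, 3)
```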
Performance characteristics of the other methods is really difficult to assert. Consider the following:
These methods are all implemented in terms of foreach or iterator, usually at very high levels in the hierarchy. Vector's map is implemented on collection.TraversableLike, for example.
To add insult to injury, which method implementation is used depends on the linearization of the class inheritance. This also applies to any method called as a helper. It has happened before that changes here caused unforeseen performance problems.
Since foreach and iterator are both O(n), any improved performance depends on specialization in other methods, such as size and slice.
For many of them, there's further dependency on the performance characteristics of the builder that was provided, which depends on the call site instead of the definition site.
So the result is that the place where the method is defined -- and documented -- does not have nearly enough information to state its performance characteristics; these may depend not only on how other methods are implemented by the inheriting collection, but even on the performance characteristics of an object, a Builder, obtained from CanBuildFrom, that is passed at the call site.
At best, any such documentation would have to be described in terms of other methods. That doesn't mean it isn't worthwhile, but it isn't easily done -- and hard tasks on open source projects depend on volunteers, who usually work on what they like, not on what is needed.