scala: data structure for multiple-root tree or forest

I am in search of a good data structure for my requirement:
Support multiple roots.
Should have links to parent and children.
Assuming each Node can be uniquely identified, I have come up with the following data structure, which seems a bit immature:
case class Node[T](data: T, children: List[Node[T]], parents: List[Node[T]])

class Forest[T](val roots: List[Node[T]]) {
  // other helper methods to create the Forest
}

object Forest {
  def apply[T](list: List[T]): Forest[T] = {
    val roots = for (l <- list) yield Node[T](l, List.empty[Node[T]], List.empty[Node[T]])
    new Forest[T](roots)
  }
}
Does anyone have better suggestions?
TIA

I suggest you check out Graph for Scala. The library isn't being very actively developed, but the basic functionality doesn't need much maintenance, and once you get used to the way it traverses nodes, you get a lot of functionality without much effort. (It might take a little thought to fit the parent/child relationship into the graph; you could use a directed graph to represent it, or two graphs, one in each direction, etc.)
Otherwise, the basics of what you have now are missing one detail: how do you actually create the forest? Right now, a node is immutable, with immutable lists of parents and children, which means that the parents and children of every node must be created before the node itself is, which means... uh-oh.
You can solve this by adding an extra layer of indirection (have lists of node IDs and a map associating IDs with actual nodes), or by adding mutability somewhere (generally a little iffy in case classes, but in this case maybe it's what you need).
And then, of course, you need a bunch of methods to actually do something useful with the forest and provide higher-level ways of constructing it.
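To make the indirection idea concrete, here is a rough sketch (names and details are mine, not a canonical design): nodes refer to each other by ID, and the forest owns a map from IDs to nodes, so everything stays immutable while still being constructible incrementally.

case class Node[T](id: Int, data: T, children: List[Int], parents: List[Int])

class Forest[T](val nodes: Map[Int, Node[T]], val roots: List[Int]) {
  def node(id: Int): Option[Node[T]] = nodes.get(id)

  // Link an existing node under a parent by rebuilding only the two affected entries.
  def addEdge(parentId: Int, childId: Int): Forest[T] = {
    val parent = nodes(parentId)
    val child  = nodes(childId)
    val updated = nodes +
      (parentId -> parent.copy(children = childId :: parent.children)) +
      (childId  -> child.copy(parents = parentId :: child.parents))
    new Forest(updated, roots.filterNot(_ == childId))
  }
}

object Forest {
  def apply[T](list: List[T]): Forest[T] = {
    val nodes = list.zipWithIndex.map { case (data, id) =>
      id -> Node(id, data, Nil, Nil)
    }.toMap
    new Forest(nodes, nodes.keys.toList)
  }
}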

Related

How to append to Rose Trees in Scala

I'm in need of a Rose Tree data structure like scalaz.Tree or the following:
case class Tree[A](root: A, children: Stream[Tree[A]])
However I'm having a hard time understanding how to write a function for appending nodes. In general, I understand that appending a node involves rebuilding the tree, and doing that with immutable data structures requires recursive functions, but I just haven't been able to put it all together. I did see Scala: Tree Insert Tail Recursion With Complex Structure but since that involves binary trees, I didn't quite grasp how to implement it for a multi-way tree.
Traditionally, I would implement this mutably using Array or such. Is there some book or resource that I should read to understand functional data structures more? Or is there some example code that could be recommended for me to read over?
It isn't obvious what your requirements are for "appending nodes". You can do it in the trivial way, inserting the second tree as the first child:
def append[A](tree1: Tree[A], tree2: Tree[A]) = tree1 match {
  case Tree(root, children) => Tree(root, tree2 #:: children)
}
If that's not what you want, can you provide an example?
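If, instead, you want to insert below a particular node, one common approach (a sketch I'm adding here, not part of the original answer) is to rebuild only the path that changes, appending under every node whose value matches a target:

def appendAt[A](tree: Tree[A], target: A, child: Tree[A]): Tree[A] =
  if (tree.root == target)
    Tree(tree.root, child #:: tree.children)
  else
    Tree(tree.root, tree.children.map(appendAt(_, target, child)))

Because everything outside the modified nodes is reused unchanged, this stays reasonably cheap even for large trees.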
Is there some book or resource that I should read to understand functional data structures more? Or is there some example code that could be recommended for me to read over?
The standard recommendation is Structure and Interpretation of Computer Programs. The code examples aren't in Scala, but it should be easy enough to translate the knowledge.

Why are SessionVars in Lift implemented using singletons?

One typical way of managing state in Lift is to create a singleton object extending SessionVar, like in this example taken from the documentation:
object MySnippetCompanion {
  object mySessionVar extends SessionVar[String]("hello")
}
The case for using SessionVars is clear and I've been using them in practice as needed. I also roughly understand how they work inside.
Still, I can't help but wonder why the mechanism for "session variables", which are clearly associated with the current session (usually just one of many sessions in the system), was designed to be used via a singleton. This goes so against my intuition that at first glance I was tempted to believe that Lift was somehow able to override Scala's language features and make object mean something different than it does in regular Scala.
Even though I now understand how it works, I can't grasp the rationale for such a design, which, at least for me, breaks the rule of least astonishment. Can someone point out any advantages or perhaps explain why such a design decision could have been made?
Session variables in Lift use Scala's DynamicVariable. Basically they allow you to statically reference a variable in a code-block and then later on call the code and substitute a value:
import scala.util.DynamicVariable

val x = new DynamicVariable(1)

def printIt() {
  println(x.value)
}

printIt()
//> 1
x.withValue(2)(printIt())
//> 2
So each time a request is handled, the scope of these dynamic variables is changed to the current session, completely hiding the state change of the current session from you as a programmer.
The other option would be to pass around a "sessionID" object which you would have to use when you want to access session specific data. Not really handy.
The reason you have to use the object keyword is that object is unique in that it defines both a value and a class. This allows Lift to call getClass to get a name that uniquely identifies this SessionVar vs. any other one, which Lift needs in order to serialize and deserialize every piece of session state in the right place(s). Furthermore if the SessionVar is in a class that has two instances (for instance a snippet rendered in two tabs), they will both refer to the same piece of session state. (The flip side of the coin is that the same SessionVar instance can be referenced by two different sessions and mean the right thing to each.)
Actually, at times this is insufficient: for instance, if you define a SessionVar in a trait and have two different classes that inherit the trait, but you need them to have two different values. The solution in that case is to override the def for the "name salt", which is combined with getClass to identify the SessionVar.
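To make the getClass-based naming more concrete, here is a heavily simplified sketch of the mechanism (this is not Lift's actual implementation, and every name in it is made up for illustration):

import scala.util.DynamicVariable

abstract class MySessionVar[T](default: => T) {
  protected def nameSalt: String = ""            // override to disambiguate, as described above
  private def key: String = getClass.getName + nameSalt

  def is: T = MySessionVar.current.value.getOrElseUpdate(key, default).asInstanceOf[T]
  def set(v: T): Unit = MySessionVar.current.value.update(key, v)
}

object MySessionVar {
  // Stand-in for the per-session storage that the framework swaps in per request via DynamicVariable.
  private val current = new DynamicVariable(scala.collection.mutable.Map.empty[String, Any])
}

// Usage mirrors the Lift idiom: the singleton's class name is the unique key into session state.
object myVar extends MySessionVar[String]("hello")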

Should Scala immutable case classes be defined to hold Seq[T], immutable.Seq[T], List[T] or Vector[T]?

If we want to define a case class that holds a single object, say a tuple, we can do it easily:
sealed case class A(x: (Int, Int))
In this case, retrieving the "x" value will take a small constant amount of time, and this class will only take a small constant amount of space, regardless of how it was created.
Now, let's assume we want to hold a sequence of values instead; we could do it like this:
sealed final case class A(x: Seq[Int])
This might seem to work as before, except that now the storage and the time to read all of x are proportional to x.length.
However, this is not actually the case, because someone could do something like this:
val hugeList = (1 to 1000000000).toList
val a = A(hugeList.view.filter(_ == 500000000))
In this case, the a object looks like an innocent case class holding a single int in a sequence, but in fact it requires gigabytes of memory, and it will take on the order of seconds to access that single element every time.
This could be fixed by specifying something like List[T] as the type instead of Seq[T]; however, this seems ugly since it adds a reference to a specific implementation, while in fact other well behaved implementations, like Vector[T], would also do.
Another worrying issue is that one could pass a mutable Seq[T], so it seems that one should at least use immutable.Seq instead of scala.collection.Seq (although the compiler can't actually enforce the immutability at the moment).
Looking at most libraries it seems that the common pattern is to use scala.collection.Seq[T], but is this really a good idea?
Or perhaps Seq is being used just because it's the shortest to type, and in fact it would be best to use immutable.Seq[T], List[T], Vector[T] or something else?
New text added in edit
Looking at the class library, some of the core functionality, like scala.reflect.api.Trees, does in fact use List[T], and in general using a concrete class seems like a good idea.
But then, why use List and not Vector?
Vector has O(1)/O(log(n)) length, prepend, append and random access, uses noticeably less memory (a List is roughly 3-4 times bigger, since each element needs its own cons cell with an object header and a next pointer), and supports cache-efficient and parallelized computation, while List has none of those properties except O(1) prepend.
So, personally I'm leaning towards Vector[T] being the correct choice for something exposed in a library data structure, where one doesn't know what operations the library user will need, despite the fact that it seems less popular.
First of all, you talk both about space and time requirements. In terms of space, your object will always be as large as the collection. It doesn't matter whether you wrap a mutable or immutable collection, that collection for obvious reasons needs to be in memory, and the case class wrapping it doesn't take any additional space (except its own small object reference). So if your collection takes "gigabytes of memory", that's a problem of your collection, not whether you wrap it in a case class or not.
You then go on to argue that a problem arises when using views instead of eager collections. But again, what is the problem, actually? You use the example of lazily filtering a collection. In general, running a filter will be an O(n) operation, just as if you were iterating over the original list. In that example it would be O(1) for successive calls if the collection were made strict. But that's a problem at the call site of your case class, not in the definition of your case class.
The only valid point I see is with respect to mutable collections. Given the defining semantics of case classes, you should really only use effectively immutable objects as arguments, so either pure immutable collections or collections to which no instance has any more write access.
There is a design error in Scala in that scala.Seq is not aliased to collection.immutable.Seq but a general seq which can be either mutable or immutable. I advise against any use of unqualified Seq. It is really wrong and should be rectified in the Scala standard library. Use collection.immutable.Seq instead, or if the collection doesn't need to be ordered, collection.immutable.Traversable.
So I agree with your suspicion:
Looking at most libraries it seems that the common pattern is to use scala.collection.Seq[T], but is this really a good idea?
No! Not good. It might be convenient, because you can pass in an Array for example without explicit conversion, but I think a cleaner design is to require immutability.
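As a sketch of that cleaner design (my illustration, not from the original answer): declare the field as collection.immutable.Seq, or pin down Vector directly, and have callers convert at the boundary.

import scala.collection.immutable

final case class A(x: immutable.Seq[Int])   // or Vector[Int] to also fix the performance profile

val fromArray = A(Array(1, 2, 3).toVector)                       // explicit conversion, no surprises
val fromView  = A((1 to 10).view.filter(_ % 2 == 0).toVector)    // forces the lazy view up front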

Scala immutable map, when to go mutable?

My present use case is pretty trivial; either a mutable or an immutable Map will do the trick.
I have a method that takes an immutable Map, which then calls a 3rd-party API method that takes an immutable Map as well:
def doFoo(foo: String = "default", params: Map[String, Any] = Map()) {
  val newMap =
    if (someCondition) params + ("foo" -> foo) else params
  api.doSomething(newMap)
}
The Map in question will generally be quite small; at most there might be an embedded List of case class instances, a few thousand entries max. So, again, assume little impact in going immutable in this case (i.e. having essentially two instances of the Map via the newMap val copy).
Still, it nags me a bit, copying the map just to get a new map with a few k->v entries tacked onto it.
I could go mutable and params.put("bar", bar), etc. for the entries I want to tack on, and then call params.toMap to convert to immutable for the API call; that is an option. But then I have to import and pass around mutable maps, which is a bit of a hassle compared to going with Scala's default immutable Map.
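For concreteness, that mutable-builder alternative might look roughly like this (a sketch reusing the placeholder names someCondition and api from the snippet above; the mutable map never escapes the method):

import scala.collection.mutable

def doFoo(foo: String = "default", params: Map[String, Any] = Map()) {
  val builder = mutable.Map[String, Any]()
  builder ++= params
  if (someCondition) builder += ("foo" -> foo)
  api.doSomething(builder.toMap)
}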
So, what are the general guidelines for when it is justified/good practice to use mutable Map over immutable Maps?
Thanks
EDIT
So, it appears that an add operation on an immutable map takes near-constant time, confirming dhg's and Nicolas's assertion that a full copy is not made, which solves the problem for the concrete case presented.
Depending on the immutable Map implementation, adding a few entries may not actually copy the entire original Map. This is one of the advantages to the immutable data structure approach: Scala will try to get away with copying as little as possible.
This kind of behavior is easiest to see with a List. If I have a val a = List(1,2,3), then that list is stored in memory. However, if I prepend an additional element like val b = 0 :: a, I do get a new 4-element List back, but Scala did not copy the original list a. Instead, we just created one new link, called it b, and gave it a pointer to the existing List a.
You can envision strategies like this for other kinds of collections as well. For example, if I add one element to a Map, the collection could simply wrap the existing map, falling back to it when needed, all while providing an API as if it were a single Map.
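A toy illustration of that wrapping idea (this is not how Scala's immutable Map is actually implemented, which uses a hash trie, but it shows the principle): the "new" map holds one extra entry and falls back to the original for everything else.

class MapWithExtra[K, V](underlying: Map[K, V], extraKey: K, extraValue: V) {
  def get(key: K): Option[V] =
    if (key == extraKey) Some(extraValue) else underlying.get(key)
  def size: Int =
    if (underlying.contains(extraKey)) underlying.size else underlying.size + 1
}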
Using a mutable object is not bad in itself, it becomes bad in a functional programming environment, where you try to avoid side-effects by keeping functions pure and objects immutable.
However, if you create a mutable object inside a function and modify this object, the function is still pure if you don't release a reference to this object outside the function. It is acceptable to have code like:
def buildVector(x: Double, y: Double, z: Double): Vector[Double] = {
  val ary = Array.ofDim[Double](3)
  ary(0) = x
  ary(1) = y
  ary(2) = z
  ary.toVector
}
Now, I think this approach is useful/recommended in two cases: (1) Performance, if creating and modifying an immutable object is a bottleneck of your whole application; (2) Code readability, because sometimes it's easier to modify a complex object in place (rather than resorting to lenses, zippers, etc.)
In addition to dhg's answer, you can take a look at the performance characteristics of the Scala collections. If an add/remove operation doesn't take linear time, the implementation must be doing something other than simply copying the entire structure. (Note that the converse is not true: linear time doesn't necessarily mean the whole structure is being copied.)
I like to use collection.Map as the declared parameter type (for inputs or return values) rather than mutable or immutable maps. collection.Map is a read-only interface that works for both kinds of implementation. A consumer method using a map really doesn't need to know about the map's implementation or how it was constructed. (It's really none of its business anyway.)
If you go with the approach of hiding a map's particular construction (be it mutable or immutable) from the consumers who use it, then you're still getting an essentially immutable map downstream. And by using collection.Map as a read-only interface, you completely remove the ".toMap" inefficiency that you would have with consumers written to use immutable.Map-typed objects. Having to convert a completely constructed map into another one simply to comply with an interface the first one doesn't support really is unnecessary overhead when you think about it.
I suspect in a few years from now we'll look back at the three separate sets of interfaces (mutable maps, immutable maps, and collections maps) and realize that 99% of the time only 2 are really needed (mutable and collections) and that using the (unfortunately) default immutable map interface really adds a lot of unnecessary overhead for the "Scalable Language".
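A small sketch of that style (my example, not from the answer above): the consumer declares the read-only collection.Map interface, so both mutable and immutable maps can be passed without a .toMap copy. Note that collection.Map only prevents the consumer from mutating the map; it doesn't guarantee that nobody else mutates it.

def report(prices: collection.Map[String, Double]): String =
  prices.map { case (item, price) => s"$item: $price" }.mkString("\n")

report(Map("apple" -> 1.0))                          // immutable map, passed as-is
report(collection.mutable.Map("banana" -> 0.5))      // mutable map, also passed as-is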

How is immutability practically implemented in the design of Scala applications?

I'm new to Scala and currently a Java developer, and I understand that Scala was designed to encourage the use of immutability in class design.
How does this translate practically to the design of classes? The only thing that comes to mind is case classes. Are case classes strongly encouraged for defining data? Example? How else is immutability encouraged in the design of Scala classes?
As a Java developer, my classes defining data were mutable. Should the equivalent Scala classes be defined as case classes?
Well, case classes certainly help, but the biggest contributor is probably the collection library. The default collections are immutable, and the methods are geared toward manipulating collections by producing new ones instead of mutating. Since the immutable collections are persistent, that doesn't require copying the whole collection, which is something one often has to do in Java.
Beyond that, for-comprehensions are monadic comprehensions, which helps with writing immutable code; there's tail-recursion optimization, which is very important for immutable algorithms; and there's general attention to immutability in many libraries, such as parser combinators and XML.
Finally, note that you have to ask for a var to get some mutability. Parameters are immutable, and val is just as short as var. Contrast this with Java, where parameters are mutable, and you need to add a final keyword to get immutability. Whereas in Scala it is as easy or easier to stay immutable, in Java it is easier to stay mutable.
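A tiny example of that style, for illustration: every transformation returns a new collection and leaves the original untouched.

val xs = List(1, 2, 3)
val ys = xs.map(_ * 10).filter(_ > 10)   // List(20, 30)
// xs is still List(1, 2, 3)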
Addendum
Persistent data structures are data structures that share parts between modified versions of themselves. This might be a bit difficult to understand, so let's consider Scala's List, which is pretty basic and easy to understand.
A Scala List is composed of two classes, known as cons and Nil. The former is actually written :: in Scala, but I'll refer to it by the traditional name.
Nil is the empty list. It doesn't contain anything. Methods that depend on the list not being empty, such as head and tail, throw exceptions, while the others work fine.
Naturally, cons must then represent a non-empty list. In fact, cons has exactly two elements: a value, and a list. These elements are known as head and tail.
So a list with three elements is composed of three cons, since each cons will hold only one value, plus a Nil. It must have a Nil because a cons must point to a list. As lists are not circular, then one of the cons must point to something other than a cons.
One example of such list is this:
val list = 1 :: 2 :: 3 :: Nil
Now, the components of a Scala List are immutable. One cannot change either the value or the tail of a cons. One benefit of immutability is that you never need to copy the collection before passing it to, or after receiving it from, some other method: you know that the list cannot change.
Now, let's consider what would happen if I modified that list. Let's consider two modifications: removing the first element and prepending a new element.
We can remove one element with the method tail, whose name is not a coincidence at all. So, we write:
val list2 = list.tail
And list2 will point to the same list that list's tail is pointing to. Nothing at all was created: we simply reused part of list. So, let's prepend an element to list2 then:
val list3 = 0 :: list2
We created a new cons there. This new cons has a value (a head) equal to 0, and its tail points to list2. Note that both list and list3 point to the same list2. These elements are being shared by both list and list3.
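The sharing described above can be observed directly with reference equality (a small check I'm adding for illustration):

list3.tail eq list2   // true: list3 reuses list2
list.tail  eq list2   // true: so does list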
There are many other persistent data structures. The very fact that the data you are manipulating is immutable makes it easy to share components.
One can find more information about this subject in the book by Chris Okasaki, Purely Functional Data Structures, or in his freely available thesis of the same name.