Adding Element to End of Vector - scala

Scaladocs explain how to add an element to a Vector.
def :+(elem: A): Vector[A]
[use case] A copy of this vector with an element appended.
Example:
scala> Vector(1,2) :+ 3
res12: scala.collection.immutable.Vector[Int] = Vector(1, 2, 3)
For a large collection, it seems expensive to copy the whole Vector, and then add an element to it.
What's the best (fastest) way to add an element to a Vector?

Concatenation of immutable Vectors is O(log N). Take a look at this paper to see how it is done:
http://infoscience.epfl.ch/record/169879/files/RMTrees.pdf

If you're going to be doing a lot of appends, you should use a Queue, as it guarantees constant-time append. For information on the time complexity of collections, you can refer to this cheat sheet:
http://www.scala-lang.org/docu/files/collections-api/collections_40.html
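For illustration, a minimal sketch of both append styles (nothing here beyond the standard library):

import scala.collection.immutable.Queue

// Appending to an immutable Queue is guaranteed constant time.
val q = Queue(1, 2).enqueue(3)   // Queue(1, 2, 3)

// Appending to a Vector is "effectively constant" time (see the next answer).
val v = Vector(1, 2) :+ 3        // Vector(1, 2, 3)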

Appending to a Vector in Scala takes effectively constant time. The vector is "copied" in the sense that most of its internal structure is reused, not in the sense that all of the elements are copied into a new vector. See the link provided by coltfred for more information about the time complexity of collections:
http://www.scala-lang.org/docu/files/collections-api/collections_40.html
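A small example of what "copied" means here: the original vector is untouched and shares almost all of its internal structure with the result.

val v = Vector(1, 2, 3)
val w = v :+ 4   // builds a new vector, reusing v's internal tree nodes

println(v)       // Vector(1, 2, 3), unchanged
println(w)       // Vector(1, 2, 3, 4)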

Related

erase last element of a vector by reference

I would like to shrink a vector which is passed by reference.
To do so, I use erase on the last element(s):
require(Rcpp)
cppFunction("void f(IntegerVector &x){
x[1] = 3;
}")
cppFunction("void g(IntegerVector &x){
x.erase(1);
}")
a <- c(1L, 2L)
a
f(a)
a
g(a)
a
Output is:
[1] 1 2
[1] 1 3
[1] 1 3
Function f shows that I can make pass-by-reference work.
However, for some reason I cannot remove the last element with function g.
(I can check that the size of the vector does decrease in C++.)
So either:
It is currently not possible with Rcpp (for good reasons, I am sure, such as memory reallocation).
I am doing it wrong.
So is it possible to remove the last element(s) of a vector in C++, and get it back in R?
EDIT:
I want to pass by reference because I use large data sets (here, a tibble with hundreds of millions of rows).
Copying is suboptimal here.

Creating graph from text file functionally using scala

I am new to the functional programming paradigm and to Scala. I am trying to solve a problem using Scala. I have a text file containing graph edges in the following format:
3, 5
4, 6
7, 8
where 3, 5 represents an edge from vertex 3 to vertex 5 in the graph.
I am using the type Map[Vertex, List[Vertex]] to represent graphs. My approach is to read the file line by line using foreach and process each line, which I think is not a functional way to do it. Any help with this is appreciated.
I will leave the file reading to you, as there are many ways to do it depending on your particular application. Here is one source you might find useful for it.
Assuming you've managed to read the file into an Array[(Int, Int)], i.e., an array of tuples (in your example Array((3,5), (4,6), (7,8))), we can turn it into the adjacency map you're looking for as follows:
arr.groupBy(_._1).mapValues(edges => edges.map(_._2))
Explanation:
We first group the tuples by first element (._1). This produces a Map[Int, Array[(Int, Int)]] representing a map from each vertex to all its edges.
Next, we transform each array so that it no longer contains the full edge information (u,v), but only the neighbour vertex v of each edge.
And we're done!
NB: This is assuming your graph is directed. If you want to turn it into an undirected graph, you can do this simply by adding (v,u) for every (u,v).
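Putting it together, here is a minimal end-to-end sketch; the file name edges.txt and the exact line format are assumptions based on your example:

import scala.io.Source

type Vertex = Int

// Parse each "u, v" line into a tuple, then group into an adjacency map.
val edges: Array[(Vertex, Vertex)] =
  Source.fromFile("edges.txt").getLines()
    .map(_.split(",").map(_.trim.toInt))
    .map { case Array(u, v) => (u, v) }
    .toArray

val adjacency: Map[Vertex, List[Vertex]] =
  edges.groupBy(_._1).mapValues(_.map(_._2).toList).toMap

For an undirected graph, flatMap each parsed (u, v) into Seq((u, v), (v, u)) before grouping.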

How to turn a known structured RDD to Vector

Assuming I have an RDD containing (Int, Int) tuples.
I wish to turn it into a Vector where the first Int in the tuple is the index and the second is the value.
Any idea how I can do that?
I have updated my question and added my solution to clarify:
My RDD is already reduced by key, and the number of keys is known.
I want a vector in order to update a single accumulator instead of multiple accumulators.
Therefore my final solution was:
reducedStream.foreachRDD(rdd => rdd.collect {
  case (x: Int, y: Int) =>
    val v = Array(0, 0, 0, 0)
    v(x) = y
    accumulator += new Vector(v)
})
Using the Vector from the accumulator example in the documentation.
rdd.collectAsMap.foldLeft(Vector.fill(numKeys)(0)) { case (acc, (k, v)) => acc.updated(k, v) }
Turn the RDD into a Map, then fold over it, building a Vector as we go. Note that the accumulator has to start at the known size (numKeys here, since you said the number of keys is known), because updated only replaces an existing index and throws on an out-of-range one.
You could use just collect(), but if there are many repetitions of tuples with the same key, that might not fit in memory.
One key thing: do you really need a Vector? A Map could be much more suitable.
If you really need a local Vector, you first have to use .collect() and then do the local transformation into a Vector. Of course you will need enough memory for this. The real problem, though, is finding a Vector that can be built efficiently from (index, value) pairs. As far as I know, Spark MLlib has the class org.apache.spark.mllib.linalg.Vectors, which can create a Vector from an array of indices and values, and even from tuples. Under the hood it uses breeze.linalg, so it is probably the best starting point for you.
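For instance, a minimal sketch (assuming rdd: RDD[(Int, Int)] has already been reduced, and size is the known number of indices):

import org.apache.spark.mllib.linalg.Vectors

val pairs = rdd.collect()   // local Array[(Int, Int)]; needs enough memory
val size = 4                // assumption: the known index count

// Vectors.sparse accepts a Seq of (index, value) pairs; values must be Double.
val vec = Vectors.sparse(size, pairs.map { case (i, x) => (i, x.toDouble) })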
If you just need ordering, you can use .sortByKey(), as you already have an RDD[(K, V)]. That gives you an ordered stream, which does not strictly match your intention but may suit you even better. You can then collapse elements that share a key with .reduceByKey(), producing a single resulting element per key.
Finally, if you really need a large vector, do .sortByKey() and then produce the actual vector with a .flatMap() that maintains a counter, dropping the extra elements for a repeated index and inserting the needed number of 'default' elements for missing indices, as in the sketch below.
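A local sketch of that last idea (done after .collect() for simplicity, rather than inside a distributed .flatMap(); size is the assumed number of indices):

val sorted = rdd.sortByKey().collect()   // Array[(Int, Int)], ordered by index

val dense = Array.fill(size)(0)          // 0 is the 'default' for missing indices
for ((i, x) <- sorted) dense(i) = x      // a repeated index keeps its last value
val vec = dense.toVector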
Hope this is clear enough.

What is the running time for size on Scala's ListBuffer?

Is it constant or O(n)? If O(n), are there similar data structures with constant-time size operations?
Strangely, size and length have different descriptions in the ListBuffer docs. For sure, ListBuffer.length is constant time. Prior to Scala 2.8, length was indeed O(n), but this is now fixed. The implementation of size in TraversableOnce suggests that it is O(n), but I may be missing something.
Other performance characteristics of Scala collections are documented here. For ListBuffer specifically,
            head  tail  apply  update  prepend  append  insert
ListBuffer   C     L     L      L       C        C       L
where C is constant and L is linear time.
Edit: Both ListBuffer length and size are now O(1). The issue mentioned by @KiptonBarros was closed with Scala 2.9.1; see https://issues.scala-lang.org/browse/SI-4933
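As a quick illustration (not a rigorous benchmark), both calls below return immediately no matter how large the buffer is, because the length is kept as a field:

import scala.collection.mutable.ListBuffer

val buf = ListBuffer.range(0, 1000000)
println(buf.length)   // 1000000, O(1)
println(buf.size)     // 1000000, also O(1): it delegates to length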

optimal way of storing multidimensional array/tensor

I am trying to create a tensor package (a tensor can be thought of as a multidimensional array) in Scala. So far I have been storing the data in a 1D Vector and doing index arithmetic.
But slicing and subarrays are not so easy to get: one needs to do a lot of arithmetic to convert multidimensional indices to 1D indices.
Is there an optimal way of storing a multidimensional array? If not, i.e. if a 1D array is the best solution, how can one optimally slice arrays (some concrete code would really help me)?
The key to answering this question is: when is pointer indirection faster than arithmetic? The answer is pretty much never. In-order traversals can be about equally fast for 2D, and things get worse from there:
2D random access:
  Array of Arrays: 600 M/second
  Multiplication: 1.1 G/second
3D in-order:
  Array of Array of Arrays: 2.4 G/second
  Multiplication: 2.8 G/second
(etc.)
So you're better off just doing the math.
Now the question is how to do slicing. Initially, if you have dimensions of n1, n2, n3, ... and indices of i1, i2, i3, ..., you compute the offset into the array by
i = i1 + n1*(i2 + n2*(i3 + ... ))
where typically i1 is chosen to be the last (innermost) dimension (but in general it should be the dimension most often in the innermost loop). That is, if it were an array of arrays of (...), you would index into it as a(...)(i3)(i2)(i1).
Now suppose you want to slice this. First, you might give an offset o1, o2, o3 to every index:
i = (i1 + o1) + n1*((i2 + o2) + n2*((i3 + o3) + ...))
and then you will have a shorter range on each (let's call these m1, m2, m3, ...).
Finally, if you eliminate a dimension entirely (let's say, for example, that m2 == 1, meaning that i2 == 0), you just simplify the formula:
i = (i1 + o1 + n1*o2) + (n1*n2)*((i3 + o3) + ...)
I will leave it as an exercise to the reader to figure out how to do this in general, but note that we can store the new constants o1 + n1*o2 and n1*n2 so we don't need to keep redoing that math on the slice.
Finally, if you are allowing arbitrary dimensions, you just put that math into a while loop. This does, admittedly, slow it down a little bit, but you're still at least as well off as if you'd used a pointer dereference (in almost every case).
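A minimal sketch of this arithmetic for the fixed 3D case (the class and method names here are mine, not from any library):

// Flat storage with i1 innermost: offset = i1 + n1*(i2 + n2*i3).
class Tensor3(n1: Int, n2: Int, n3: Int) {
  private val data = new Array[Double](n1 * n2 * n3)

  private def flat(i1: Int, i2: Int, i3: Int): Int = i1 + n1 * (i2 + n2 * i3)

  def apply(i1: Int, i2: Int, i3: Int): Double = data(flat(i1, i2, i3))
  def update(i1: Int, i2: Int, i3: Int, v: Double): Unit =
    data(flat(i1, i2, i3)) = v

  // A slice is a view over the same data: the offsets o1, o2, o3 are folded
  // into the same multiply-add on each access, and no elements are copied.
  class Slice(o1: Int, o2: Int, o3: Int) {
    def apply(i1: Int, i2: Int, i3: Int): Double =
      data(flat(i1 + o1, i2 + o2, i3 + o3))
  }
  def slice(o1: Int, o2: Int, o3: Int): Slice = new Slice(o1, o2, o3)
}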
From my own general experience: if you have to write a multidimensional (rectangular) array class yourself, do not aim to store the data as Array[Array[Double]]; use one-dimensional storage and add helper methods for converting the multidimensional access tuples to a simple index and vice versa.
When using lists of lists, you need far too much bookkeeping to ensure that all lists are of the same size, and you have to be careful when assigning a sublist to another sublist (because that makes the assigned-to sublist identical to the source, and you wonder why changing the item at (0,5) also changes (3,5)).
Of course, if you expect a certain dimension to be sliced much more often than another and you want to have reference semantics for that dimension as well, a list of lists will be the better solution, as you may pass around those inner lists as a slice to the consumer without making any copy. But if you don’t expect that, it is a better solution to add a proxy class for the slices which maps to the multidimensional array (which in turn maps to the one-dimensional storage array).
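To make the aliasing pitfall above concrete:

val m = Array.fill(4, 6)(0.0)   // 4 rows of 6 columns, all zero
m(3) = m(0)                     // rows 0 and 3 now share one underlying array
m(0)(5) = 42.0
println(m(3)(5))                // prints 42.0, not 0.0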
Just an idea: how about a map with Int-tuples as keys?
Example:
val twoDimMatrix = Map((1,1) -> -1, (1,2) -> 5, (2,1) -> 7.7, (2,2) -> 9)
and then you could
scala> twoDimMatrix.filterKeys{_._2 == 1}.values
res1: Iterable[AnyVal] = MapLike(-1, 7.7)
or
twoDimMatrix.filterKeys { case (dim1, dim2) => dim1 == dim2 } // diagonal
This way the index arithmetic would be done by the map. I don't know how practical or fast this is, though.
If the number of dimensions is known at design time, you can use a collection of collections ... (n times) of collections. If you must be able to build a vector for any number of dimensions, then there's nothing convenient in the Scala API to do it (as far as I know).
You can simply store the information in a multidimensional array (e.g. Array[Array[Double]]).
If the tensors are small and can fit in cache, you can have a performance improvement with 1D arrays because of data memory locality. It should also be faster to copy the whole tensor.
As for slicing arithmetic, it depends on what kind of slicing you require. I suppose you already have a function for extracting an element based on its indices. So write a basic slicing loop based on index iteration, manually insert the expression for extracting an element, and then try to simplify the whole loop. This is often simpler than writing a correct expression from scratch.
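To make that concrete, a sketch under the layout used earlier (i1 innermost, offset i1 + n1*(i2 + n2*i3)): fixing the outermost index i3 = k makes the slice one contiguous block, so the simplified loop is a single linear copy.

// Extract the 2D slice at fixed i3 = k from a flat (n1, n2, n3) tensor.
def sliceAt3(data: Array[Double], n1: Int, n2: Int, k: Int): Array[Double] = {
  val out = new Array[Double](n1 * n2)
  val base = n1 * n2 * k        // flat(i1, i2, k) = i1 + n1*i2 + n1*n2*k
  var j = 0
  while (j < out.length) {      // the simplified loop: one contiguous run
    out(j) = data(base + j)
    j += 1
  }
  out
}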