Shuffle a Scala list randomly and repeat it - scala

I want to shuffle a Scala list randomly.
I know I can do this by using scala.util.Random.shuffle.
But each call gives me a newly shuffled list. What I really want is that, in some cases, the shuffle gives me the same output every time. How can I achieve that?
Basically, I want to shuffle a list randomly at first and then repeat it in the same order: the first time the order is generated randomly, and after that, based on some parameter, the same shuffling is repeated.

Use setSeed() to seed the generator before shuffling. Then if you want to repeat a shuffle reuse the seed.
For example:
scala> util.Random.setSeed(41L) // some seed chosen for no particular reason
scala> util.Random.shuffle(Seq(1,2,3,4))
res0: Seq[Int] = List(2, 4, 1, 3)
That shuffled: 1st -> 3rd, 2nd -> 1st, 3rd -> 4th, 4th -> 2nd
Now we can repeat that same shuffle pattern.
scala> util.Random.setSeed(41L) // same seed
scala> util.Random.shuffle(Seq(2,4,1,3)) // result of previous shuffle
res1: Seq[Int] = List(4, 3, 2, 1)
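Alternatively (a small sketch of my own, not from the answer above): create a fresh Random instance from the seed instead of re-seeding the shared global generator, so other code using util.Random is unaffected:
import scala.util.Random

// Same seed in, same permutation out; the global generator is untouched.
def seededShuffle[A](seed: Long, xs: Seq[A]): Seq[A] =
  new Random(seed).shuffle(xs)

seededShuffle(41L, Seq(1, 2, 3, 4)) // identical result on every call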

Let a be the seed parameter and let b be the number of times you want to shuffle.
There are two ways to do roughly this.
You can use scala.util.Random.setSeed(a), where a can be any integer; after you finish your b shuffles, set the seed to a again and your shuffles will repeat in the same order for that parameter a.
The other way is to shuffle the index list 0 until arr.size b times, save that as a nested list or vector, and then map it onto your iterable:
val arr = List("Bob", "Knight", "John")
val randomer = (0 until b).map(_ => scala.util.Random.shuffle(arr.indices.toList))
randomer.map(perm => perm.map(i => arr(i)))
You can apply the same randomer to any other list you want to shuffle by mapping it the same way.

Related

Functional Programming way to calculate something like a rolling sum

Let's say I have a list of numerics:
val list = List(4,12,3,6,9)
For every element in the list, I need to find the rolling sum, i.e. the final output should be:
List(4, 16, 19, 25, 34)
Is there any transformation that allows us to take as input two elements of the list (the current and the previous) and compute based on both?
Something like map(initial)((curr,prev) => curr+prev)
I want to achieve this without maintaining any shared global state.
EDIT: I would like to be able to do the same kinds of computation on RDDs.
You may use scanLeft:
list.scanLeft(0)(_ + _).tail
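To see why the .tail is needed, note that scanLeft emits its seed value as the first element:
val list = List(4, 12, 3, 6, 9)
list.scanLeft(0)(_ + _)      // List(0, 4, 16, 19, 25, 34) -- seed included
list.scanLeft(0)(_ + _).tail // List(4, 16, 19, 25, 34)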
The cumSum method below should work for any RDD[N], where N has an implicit Numeric[N] available, e.g. Int, Long, BigInt, Double, etc.
import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

def cumSum[N : Numeric : ClassTag](rdd: RDD[N]): RDD[N] = {
  val num = implicitly[Numeric[N]]
  val nPartitions = rdd.partitions.length
  // Total of each partition except the last, prefix-summed so that
  // partitionCumSums(i) is the sum of everything before partition i.
  val partitionCumSums = rdd.mapPartitionsWithIndex((index, iter) =>
    if (index == nPartitions - 1) Iterator.empty
    else Iterator.single(iter.foldLeft(num.zero)(num.plus))
  ).collect
   .scanLeft(num.zero)(num.plus)
  // Run a local scan within each partition, starting from its offset.
  rdd.mapPartitionsWithIndex((index, iter) =>
    if (iter.isEmpty) iter
    else {
      val start = num.plus(partitionCumSums(index), iter.next)
      iter.scanLeft(start)(num.plus)
    }
  )
}
It should be fairly straightforward to generalize this method to any associative binary operator with a "zero" (i.e. any monoid.) It is the associativity that is key for the parallelization. Without this associativity you're generally going to be stuck with running through the entries of the RDD in a serial fashion.
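For instance, here is a sketch of that generalization (the name cumFold and the explicit zero/op parameters are mine, not part of the original answer):
import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

// Same algorithm as cumSum, with Numeric replaced by an explicit
// identity element and an associative combine function (a monoid).
def cumFold[N: ClassTag](rdd: RDD[N], zero: N)(op: (N, N) => N): RDD[N] = {
  val nPartitions = rdd.partitions.length
  val prefixes = rdd.mapPartitionsWithIndex((index, iter) =>
    if (index == nPartitions - 1) Iterator.empty
    else Iterator.single(iter.foldLeft(zero)(op))
  ).collect()
   .scanLeft(zero)(op)
  rdd.mapPartitionsWithIndex((index, iter) =>
    if (iter.isEmpty) iter
    else {
      val start = op(prefixes(index), iter.next())
      iter.scanLeft(start)(op)
    }
  )
}
// e.g. cumFold(stringRdd, "")(_ + _) concatenates strings cumulatively.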
I don't know what functionalities are supported by Spark RDDs, so I am not sure if this satisfies your conditions, because I don't know if zipWithIndex is supported (if the answer is not helpful, please let me know in a comment and I will delete my answer):
list.zipWithIndex.map{x => list.take(x._2+1).sum}
This code works for me; it sums up the elements. It takes the index of each list element and then sums the corresponding first n+1 elements of the list (note the +1, since zipWithIndex starts at 0). Be aware that it re-traverses the prefix for every element, so it is quadratic in the list length.
When printing it, I get the following:
List(4, 16, 19, 25, 34)

Breakdown of reduce function

I currently have:
x.collect()
// Result: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
val a = x.reduce((x, y) => x + 1)
// a: Int = 6
val b = x.reduce((x, y) => y + 1)
// b: Int = 12
I have tried to follow what has been said here (http://www.python-course.eu/lambda.php) but still don't quite understand what the individual operations are that lead to these answers.
Could anyone please explain the steps being taken here?
Thank you.
The reason is that the function (x, y) => x + 1 is not associative (nor commutative). RDD.reduce requires a commutative, associative function; this is necessary to avoid indeterminacy when combining results across different partitions.
You can think of the reduce() method as grabbing two elements from the collection, applying them to a function that results in a new element, and putting that new element back in the collection, replacing the two it grabbed. This is done repeatedly until there is only one element left. In other words, that previous result is going to get re-grabbed until there are no more previous results.
So you can see where (x,y) => x+1 results in a new value (x+1) which would be different from (x,y) => y+1. Cascade that difference through all the iterations and ...
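By contrast, an associative and commutative function gives a stable result no matter how Spark partitions the data. A quick sketch, assuming x was created with something like sc.parallelize in the spark-shell:
val x = sc.parallelize(1 to 10)
x.reduce(_ + _)           // 55, regardless of partitioning
x.reduce((x, y) => x + 1) // varies with partition count and combine order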

What is the difference between partition and groupBy?

I am reading through Twitter's Scala School right now and was looking at the groupBy and partition methods for collections. And I am not exactly sure what the difference between the two methods is.
I did some testing on my own:
scala> List(1, 2, 3, 4, 5, 6).partition(_ % 2 == 0)
res8: (List[Int], List[Int]) = (List(2, 4, 6),List(1, 3, 5))
scala> List(1, 2, 3, 4, 5, 6).groupBy(_ % 2 == 0)
res9: scala.collection.immutable.Map[Boolean,List[Int]] = Map(false -> List(1, 3, 5), true -> List(2, 4, 6))
So does this mean that partition returns a list of two lists and groupBy returns a Map with boolean keys and list values? Both have the same "effect" of splitting a list into two different parts based on a condition. I am not sure why I would use one over the other. So, when would I use partition over groupBy and vice-versa?
groupBy is better suited for lists of more complex objects.
Say, you have a class:
case class Beer(name: String, cityOfBrewery: String)
and a List of beers:
val beers = List(Beer("Bitburger", "Bitburg"), Beer("Frueh", "Cologne") ...)
you can then group beers by cityOfBrewery:
val beersByCity = beers.groupBy(_.cityOfBrewery)
Now you can get yourself a list of all beers brewed in any city you have in your data:
beersByCity("Cologne")
// List(Beer("Frueh", "Cologne"), ...)
Neat, isn't it?
"And I am not exactly sure what the difference between the two methods is."
The difference is in their signature. partition expects a function A => Boolean while groupBy expects a function A => K.
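Concretely, for a List[A] the (simplified) signatures look like this:
def partition(p: A => Boolean): (List[A], List[A])
def groupBy[K](f: A => K): Map[K, List[A]]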
It appears that in your case the function you apply with groupBy is A => Boolean too, but you don't always want that; sometimes you want to group by a function that doesn't return a boolean.
For example, if you want to group a list of strings by their length, you need to do it with groupBy, as in the example below.
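A minimal example (the exact ordering of the Map entries may differ):
List("a", "ab", "bc", "abc").groupBy(_.length)
// Map(1 -> List(a), 2 -> List(ab, bc), 3 -> List(abc))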
So, when would I use partition over groupBy and vice-versa?
Use groupBy if the image of the function you apply is not the boolean set (i.e. f(x) for an input x yields something other than a boolean). If the function is boolean-valued, you can use either; it's up to you whether you prefer a Map or a (List, List) as output.
partition is for when you need to split a collection into two based on yes/no logic (even/odd numbers, uppercase/lowercase letters, you name it). groupBy has more general usage: producing many groups, based on some function. Say you want to split a corpus of words into bins depending on their first letter (resulting in up to 26 groups); that is simply not possible with .partition. See the sketch below.
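A quick sketch of that first-letter binning, with a made-up word list (Map entry order may vary):
val words = List("ale", "ant", "beer", "cider")
words.groupBy(_.head)
// Map(a -> List(ale, ant), b -> List(beer), c -> List(cider))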

In Scala, how to get a slice of a list from nth element to the end of the list without knowing the length?

I'm looking for an elegant way to get a slice of a list from element n onwards without having to specify the length of the list. Let's say we have a multiline string which I split into lines and then want to get a list of all lines from line 3 onwards:
string.split("\n").slice(3,X) // But I don't know what X is...
What I'm really interested in here is whether there's a way to get hold of a reference to the list returned by the split call so that its length can be substituted into X at the time of the slice call, kind of like a fancy _ (in which case it would read as slice(3, _.length))? In Python one doesn't need to specify the last element of the slice.
Of course I could solve this by using a temp variable after the split, or creating a helper function with a nice syntax, but I'm just curious.
Just drop the first n elements you don't need:
List(1,2,3,4).drop(2)
res0: List[Int] = List(3, 4)
or in your case:
string.split("\n").drop(2)
There is also the paired method .take(n) that does the opposite thing; you can think of it as .slice(0, n).
In case you need both parts, use .splitAt:
val (left, right) = List(1,2,3,4).splitAt(2)
left: List[Int] = List(1, 2)
right: List[Int] = List(3, 4)
If what you actually want is the last n elements, use takeRight(n):
"communism is sharing => resource saver".takeRight(3)
//> res0: String = ver
You can use Scala's list method takeRight. It will not throw an exception when the list's length is not enough, like this:
val t = List(1,2,3,4,5);
t.takeRight(3);
res1: List[Int] = List(3,4,5)
If the list is shorter than the number of elements you want to take, this will not throw an exception:
val t = List(4,5);
t.takeRight(3);
res1: List[Int] = List(4,5)
To get the last 2 elements lazily (note the iterator yields them in reverse order, 5 then 4):
List(1,2,3,4,5).reverseIterator.take(2)

Scala: what is the most appropriate data structure for sorted subsets?

Given a large collection (let's call it 'a') of elements of type T (say, a Vector or List) and an evaluation function 'f' (say, (T) => Double) I would like to derive from 'a' a result collection 'b' that contains the N elements of 'a' that result in the highest value under f. The collection 'a' may contain duplicates. It is not sorted.
Maybe leaving the question of parallelizability (map/reduce etc.) aside for a moment, what would be the appropriate Scala data structure for compiling the result collection 'b'? Thanks for any pointers / ideas.
Notes:
(1) I guess my use case can be most concisely expressed as
val a = Vector( 9,2,6,1,7,5,2,6,9 ) // just an example
val f : (Int)=>Double = (n)=>n // evaluation function
val b = a.sortBy( f ).takeRight( N ) // sort ascending, then clip to the N highest
except that I do not want to sort the entire set.
(2) one option might be an iteration over 'a' that fills a TreeSet with 'manual' size bounding (reject anything worse than the worst item in the set, don't let the set grow beyond N). However, I would like to retain duplicates present in the original set in the result set, and so this may not work.
(3) if a sorted multi-set is the right data structure, is there a Scala implementation of this? Or a binary-sorted Vector or Array, if the result set is reasonably small?
You can use a priority queue:
def firstK[A](xs: Seq[A], k: Int)(implicit ord: Ordering[A]) = {
  // Reversed ordering makes this a min-heap: the head is always the
  // smallest of the k elements retained so far.
  val q = new scala.collection.mutable.PriorityQueue[A]()(ord.reverse)
  val (before, after) = xs.splitAt(k)
  q ++= before
  // For each remaining element, keep whichever is larger: the new
  // element or the current minimum of the queue.
  after.foreach(x => q += ord.max(x, q.dequeue))
  q.dequeueAll
}
We fill the queue with the first k elements and then compare each additional element to the head of the queue, swapping as necessary. This works as expected and retains duplicates:
scala> firstK(Vector(9, 2, 6, 1, 7, 5, 2, 6, 9), 4)
res14: scala.collection.mutable.Buffer[Int] = ArrayBuffer(6, 7, 9, 9)
And it doesn't sort the complete list. I've used an Ordering in this implementation, but adapting it to use an evaluation function would be pretty trivial.
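For example, a hypothetical adapter (the name topNBy is mine) that takes the evaluation function directly, built with Ordering.by:
// Reuse firstK with an Ordering derived from the evaluation function f.
def topNBy[T](xs: Seq[T], n: Int)(f: T => Double): Seq[T] =
  firstK(xs, n)(Ordering.by(f))

topNBy(Vector(9, 2, 6, 1, 7, 5, 2, 6, 9), 4)(_.toDouble)
// e.g. ArrayBuffer(6, 7, 9, 9)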