Make a set of random Integers in Scala

Is there a way in Scala to have a set of Ints be random without duplicates?
For example, I have a set of Ints, currently all zero by default: a, b, c, d, e. I want to assign each one a random Int from 1-100 while never assigning the same number to two of the variables. Thanks.

I can see two ways this can be done.
The first is the simplest: if the range (1-100) is small enough, you can just generate every value in the range, shuffle them, and take the first m:
import scala.util.Random
Random.shuffle((0 until 100).toList).take(4)
result:
res0: List[Int] = List(54, 11, 35, 15)
But if the range is large, this won't be very efficient (the whole range must be materialized in memory). So when the number of picked values (m) is much smaller than the size of the range (n), it's more efficient to keep generating random values, keeping only those that weren't picked before, until you have m of them.
Here is how:
import scala.util.Random
def distinctRandomMOutOfN(m: Int, n: Int): Set[Int] = {
  require(m <= n)
  Stream.continually(Random.nextInt(n)).scanLeft(Set[Int]()) {
    (accum, el) => accum + el
  }.dropWhile(_.size < m).head
}
distinctRandomMOutOfN(4, 100)
result:
res1: Set[Int] = Set(99, 28, 82, 87)
The downside of the second approach is that, if m is close to n, it takes an average time close to O(m²) to compute.
Update:
So if you want a general solution that will work efficiently in every case, you can use a hybrid variant: use the first approach if m is of the same order of magnitude as n (i.e. m * 2 >= n) and the second variant otherwise.
This implementation will use O(m) memory and will have an average running time of O(m).
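A sketch of that hybrid variant, following the description above (the function name is mine):
import scala.util.Random

def distinctRandom(m: Int, n: Int): Set[Int] = {
  require(m <= n)
  if (m * 2 >= n)
    // m is of the same order as n: shuffle the whole range
    Random.shuffle((0 until n).toList).take(m).toSet
  else
    // m is much smaller than n: retry random values until m distinct ones are collected
    Stream.continually(Random.nextInt(n))
      .scanLeft(Set.empty[Int])(_ + _)
      .dropWhile(_.size < m)
      .head
}

distinctRandom(4, 100) // four distinct values in [0, 100)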

You can be confident that there are no duplicates if you simply shuffle all the possible elements and then take what you need.
import scala.util.Random
Random.shuffle(1 to 100).take(5) // res0: Vector(74, 82, 68, 24, 15)

Related

Generating a random List() of List(List(Doubles))

I'm trying to figure out how to generate a list of random doubles in the range -50 to 50, with the length of the list being 20 (so 20 random doubles, each ranging from -50 to 50).
I then want to create a fixed number (could be any number, we'll say 3 for now) of such randomized double lists inside a List[List[Double]]. I read up on the Random docs but it is still very confusing to me. This is what I currently have:
import scala.util.Random

val length: Int = 20
val doubles: List[Double] = List()
val listOf: List[List[Double]] = List(List())
val rand = new Random()
Essentially, let's say I do generate a list of 20 elements with random doubles ranging from -50 to 50. I then want to generate a fixed number of lists that each contain such a randomized list of doubles.
Ex:
val doubles: List[Double] = List(-29.3, 46.8, -17.0, 9.2, 1.4) // in this case, doubles has a length of 5
val listOf: List[List[Double]] = List(List(-29.3, 46.8, -17.0, 9.2, 1.4), List(-5.0, 3.4, 31.5, 29.0, -41.3))
// in this case, the inner lists have a length of 5, and the fixed number is 2 because listOf has a length of 2
I am also looking to approach this problem with no mutability. How can I generate a random list of doubles with the above specs, and then generate a list of random lists?
The straightforward answer is simply:
import scala.util.Random
List.fill(3)(List.fill(20)(Random.between(-50.0, 50.0)))
The likelihood of repeating any of the random Doubles is extremely small, but if you absolutely must guarantee uniqueness, without mutation, then here's one rather inefficient solution.
import scala.util.Random
def isDistinct(lld: List[List[Double]]): Boolean =
  lld.flatten.foldLeft((true, Set.empty[Double])) {
    case ((res, seen), dbl) => (res && !seen(dbl), seen + dbl)
  }._1

LazyList.continually {
  val llr = List.fill(3)(List.fill(20)(Random.between(-50.0, 50.0)))
  Option.when(isDistinct(llr))(llr)
}.flatten.head
Also worth noting: between() is inclusive at the bottom (so -50.0 is unlikely but possible) and exclusive at the top (so exactly 50.0 shouldn't be possible).
Scala 2.12.x translation
def isDistinct(. . . //same

val rng = new scala.util.Random

Stream.continually {
  val llr = List.fill(3)(List.fill(20)(rng.nextDouble * 100 - 50))
  if (isDistinct(llr)) Some(llr) else None
}.flatten.head

Functional Programming way to calculate something like a rolling sum

Let's say I have a list of numerics:
val list = List(4,12,3,6,9)
For every element in the list, I need to find the rolling sum, i.e. the final output should be:
List(4, 16, 19, 25, 34)
Is there any transformation that allows us to take as input two elements of the list (the current and the previous) and compute based on both?
Something like map(initial)((curr,prev) => curr+prev)
I want to achieve this without maintaining any shared global state.
EDIT: I would like to be able to do the same kinds of computation on RDDs.
You may use scanLeft:
list.scanLeft(0)(_ + _).tail
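For the example list from the question this gives:
val list = List(4, 12, 3, 6, 9)
list.scanLeft(0)(_ + _)      // List(0, 4, 16, 19, 25, 34)
list.scanLeft(0)(_ + _).tail // List(4, 16, 19, 25, 34)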
The cumSum method below should work for any RDD[N], where N has an implicit Numeric[N] available, e.g. Int, Long, BigInt, Double, etc.
import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD
def cumSum[N : Numeric : ClassTag](rdd: RDD[N]): RDD[N] = {
  val num = implicitly[Numeric[N]]
  val nPartitions = rdd.partitions.length
  // Sum of each partition except the last, then prefix-summed, so that
  // partitionCumSums(i) is the total of all partitions before partition i.
  val partitionCumSums = rdd.mapPartitionsWithIndex((index, iter) =>
      if (index == nPartitions - 1) Iterator.empty
      else Iterator.single(iter.foldLeft(num.zero)(num.plus))
    ).collect
    .scanLeft(num.zero)(num.plus)
  // Within each partition, scan from the offset computed above.
  rdd.mapPartitionsWithIndex((index, iter) =>
    if (iter.isEmpty) iter
    else {
      val start = num.plus(partitionCumSums(index), iter.next)
      iter.scanLeft(start)(num.plus)
    }
  )
}
It should be fairly straightforward to generalize this method to any associative binary operator with a "zero" (i.e. any monoid). It is the associativity that is key for the parallelization. Without this associativity you're generally going to be stuck running through the entries of the RDD in a serial fashion.
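For instance, a sketch of that generalization with the zero element and the associative combine function passed in explicitly (the name scanWithMonoid and its signature are mine, not an existing Spark API):
import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

def scanWithMonoid[A: ClassTag](rdd: RDD[A])(zero: A)(combine: (A, A) => A): RDD[A] = {
  val nPartitions = rdd.partitions.length
  // Combine each partition (except the last) into one value, then prefix-combine them.
  val partitionPrefixes = rdd.mapPartitionsWithIndex((index, iter) =>
      if (index == nPartitions - 1) Iterator.empty
      else Iterator.single(iter.foldLeft(zero)(combine))
    ).collect
    .scanLeft(zero)(combine)

  rdd.mapPartitionsWithIndex((index, iter) =>
    if (iter.isEmpty) iter
    else {
      val start = combine(partitionPrefixes(index), iter.next)
      iter.scanLeft(start)(combine)
    }
  )
}
With num.zero and num.plus this reduces to the cumSum above.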
I don't know what functionality is supported by Spark RDDs, so I am not sure if this satisfies your conditions, because I don't know if zipWithIndex is supported (if the answer is not helpful, please let me know in a comment and I will delete my answer):
list.zipWithIndex.map { x => list.take(x._2 + 1).sum }
This code works for me: for each element it takes the element's index and sums the first index + 1 elements of the list (the +1 is needed because zipWithIndex starts at 0).
When printing it, I get the following:
List(4, 16, 19, 25, 34)

Using `quick-look` to find a certain element

First of all, I want to say that this is a school assignment and I am only seeking some guidance.
My task was to write an algorithm that finds the kth smallest element in a Seq using quickselect. This should be easy enough, but when running some tests I hit a wall. For some reason, with the input (List(1, 1, 1, 1), 1) it goes into an infinite loop.
Here is my implementation:
val rand = new scala.util.Random()

def find(seq: Seq[Int], k: Int): Int = {
  require(0 <= k && k < seq.length)
  val a: Array[Int] = seq.toArray[Int] // Can't modify the argument sequence
  val pivot = rand.nextInt(a.length)
  val (low, high) = a.partition(_ < a(pivot))
  if (low.length == k) a(pivot)
  else if (low.length < k) find(high, k - low.length)
  else find(low, k)
}
For some reason (or because I am tired) I cannot spot my mistake. If someone could hint me where I go wrong I would be pleased.
Basically you are depending on this line - val (low, high) = a.partition(_ < a(pivot)) - to split the array into two arrays. The first contains the elements strictly smaller than the pivot element, and the second contains everything else, including the pivot element itself and any duplicates of it.
Then you say that if the first array has length k, you have already seen k elements smaller than your pivot element, which means the pivot element is actually the (k+1)th smallest, and you are returning the (k+1)th smallest element instead of the kth. This is your first mistake.
Also, a greater problem occurs when all the elements are the same, because your first array will then always have 0 elements while the second never shrinks, so the recursion never terminates.
Not only that, your code can give a wrong answer for inputs with repeating elements among the k smallest ones, like (1, 3, 4, 1, 2).
The solution lies in the observation that in the sequence (1, 1, 1, 1) the 4th smallest element is the 4th 1, meaning you have to account for the elements that are equal to the pivot, not just those strictly smaller than it (using <= instead of < is a start, but it is not enough on its own).
Also, a single partition call with one predicate can only split the array two ways, so it cannot separate the elements that are smaller than, equal to, and greater than the pivot; you will have to write that three-way split yourself, as sketched below.
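For guidance, here is a minimal sketch of such a three-way split built from two partition calls (the name findKth is mine, and k is treated as zero-based, matching the require in the question):
import scala.util.Random

val rand = new Random()

def findKth(seq: Seq[Int], k: Int): Int = {
  require(0 <= k && k < seq.length)
  val pivot = seq(rand.nextInt(seq.length))
  val (low, rest)   = seq.partition(_ < pivot)   // strictly smaller than the pivot
  val (equal, high) = rest.partition(_ == pivot) // equal to / strictly greater than the pivot
  if (k < low.length) findKth(low, k)
  else if (k < low.length + equal.length) pivot  // the kth element is one of the pivot's duplicates
  else findKth(high, k - low.length - equal.length)
}

findKth(List(1, 1, 1, 1), 1) // 1; terminates because the `equal` group is never recursed into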

How to write an efficient groupBy-size filter in Scala, can be approximate

Given a List[Int] in Scala, I wish to get the Set[Int] of all Ints which appear at least thresh times. I can do this using groupBy or foldLeft, then filter. For example:
val thresh = 3
val myList = List(1,2,3,2,1,4,3,2,1)
myList.foldLeft(Map[Int,Int]()){case(m, i) => m + (i -> (m.getOrElse(i, 0) + 1))}.filter(_._2 >= thresh).keys
will give Set(1,2).
Now suppose the List[Int] is very large. How large is hard to say, but in any case this seems wasteful, as I don't care about each Int's exact frequency, only whether it is at least thresh. Once it has passed thresh there's no need to keep counting; just add the Int to the Set[Int].
The question is: can I do this more efficiently for a very large List[Int],
a) if I need a true, accurate result (no room for mistakes)
b) if the result can be approximate, e.g. by using some hashing trick or Bloom filters, where the Set[Int] might include some false positives, or where "the frequency of an Int > thresh" isn't really a Boolean but a Double in [0, 1].
First of all, you can't do better than O(N), as you need to check each element of the initial list at least once. Your current approach is O(N), presuming that operations with IntMap are effectively constant.
Now what you can try in order to increase efficiency:
Update the map only when the current counter value is less than or equal to the threshold. This eliminates a huge number of the most expensive operations (map updates).
Try a faster map instead of IntMap. If you know that the values of the initial List are in a fixed range, you can use an Array instead of IntMap (with the index as the key). Another possible option is a mutable HashMap with sufficient initial capacity. As my benchmark shows, this actually makes a significant difference.
As #ixx proposed, after incrementing a value in the map, check whether it equals the threshold (3 here) and in that case add the element immediately to the result list. This saves you one linear traversal (which turns out not to be that significant for large input).
I don't see how any approximate solution can be faster (only if you ignore some elements at random). Otherwise it will still be O(N).
Update
I created a microbenchmark to measure the actual performance of the different implementations. For sufficiently large input and output, Ixx's suggestion of immediately adding elements to the result list doesn't produce a significant improvement. However, a similar approach can be used to eliminate unnecessary Map updates (which appear to be the most expensive operation).
Results of the benchmarks (avg run times on 1,000,000 elements, with pre-warming):
Author's solution: 447 ms
Ixx's solution: 412 ms
Ixx's solution 2 (eliminated excessive map writes): 150 ms
My solution: 57 ms
My solution uses a mutable HashMap instead of the immutable IntMap and includes all the other possible optimizations.
Ixx's updated solution:
val tuple = (Map[Int, Int](), List[Int]())
val res = myList.foldLeft(tuple) {
  case ((m, s), i) =>
    val count = m.getOrElse(i, 0) + 1
    (if (count <= 3) m + (i -> count) else m, if (count == thresh) i :: s else s)
}
My solution:
import scala.collection.mutable
import scala.collection.mutable.ListBuffer

val map = new mutable.HashMap[Int, Int]()
val res = new ListBuffer[Int]
myList.foreach { i =>
  val c = map.getOrElse(i, 0) + 1
  if (c == thresh) {
    res += i
  }
  if (c <= thresh) {
    map(i) = c
  }
}
The full microbenchmark source is available here.
You could use foldLeft to collect the matching items, like this:
val tuple = (Map[Int, Int](), List[Int]())
myList.foldLeft(tuple) {
  case ((m, s), i) =>
    val count = m.getOrElse(i, 0) + 1
    (m + (i -> count), if (count == thresh) i :: s else s)
}
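The matching items end up in the second component of the resulting tuple; for example (the name matches is mine):
val (_, matches) = myList.foldLeft(tuple) {
  case ((m, s), i) =>
    val count = m.getOrElse(i, 0) + 1
    (m + (i -> count), if (count == thresh) i :: s else s)
}
matches.toSet // Set(1, 2) for the example input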
I could measure a performance improvement of about 40% with a small list, so it's definitely an improvement...
Edited to use List and prepend, which takes constant time (see comments).
If by "more efficiently" you mean space efficiency (in the extreme case, when the list is infinite), there's a probabilistic data structure called Count-Min Sketch that estimates the frequency of the items inside it. You can then discard those with an estimated frequency below your threshold.
There's a Scala implementation in the Algebird library.
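To illustrate the idea without pulling in a dependency, here is a toy Count-Min Sketch in plain Scala (the class and its width/depth parameters are my own illustrative choices; Algebird's implementation is what you'd use in practice):
import scala.util.Random

// Toy Count-Min Sketch, for illustration only. Each element is hashed into one
// bucket per row; the estimate is the minimum bucket value over all rows, which
// can overestimate (hash collisions) but never underestimate a true count.
class CountMinSketch(width: Int, depth: Int, seed: Int = 42) {
  private val table = Array.fill(depth, width)(0L)
  private val rowSeeds = {
    val r = new Random(seed)
    Array.fill(depth)(r.nextInt())
  }
  private def bucket(row: Int, x: Int): Int =
    ((x * 31 + rowSeeds(row)) & Int.MaxValue) % width

  def add(x: Int): Unit =
    (0 until depth).foreach(row => table(row)(bucket(row, x)) += 1)

  def estimate(x: Int): Long =
    (0 until depth).map(row => table(row)(bucket(row, x))).min
}

val thresh = 3
val myList = List(1, 2, 3, 2, 1, 4, 3, 2, 1)
val cms = new CountMinSketch(width = 64, depth = 4)
myList.foreach(cms.add)
val approx = myList.toSet.filter(cms.estimate(_) >= thresh)
// approx == Set(1, 2) here, but may contain false positives in general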
You can change your foldLeft example a bit by using a mutable.Set that is built incrementally and at the same time used as a filter while iterating over your Seq via withFilter. However, because I'm using withFilter I cannot use foldLeft and have to make do with foreach and a mutable map:
import scala.collection.mutable
def getItems[A](in: Seq[A], threshold: Int): Set[A] = {
  val counts: mutable.Map[A, Int] = mutable.Map.empty
  val result: mutable.Set[A] = mutable.Set.empty
  in.withFilter(!result(_)).foreach { x =>
    counts.update(x, counts.getOrElse(x, 0) + 1)
    if (counts(x) >= threshold) {
      result += x
    }
  }
  result.toSet
}
So, this discards items that have already been added to the result set while running through the Seq the first (and only) time, because withFilter filters the Seq lazily inside the applied function (map, flatMap, foreach) rather than returning a filtered Seq.
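Usage with the question's example data:
val thresh = 3
val myList = List(1, 2, 3, 2, 1, 4, 3, 2, 1)
getItems(myList, thresh) // Set(1, 2)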
EDIT:
I changed my solution to not use Seq.count, which was stupid, as Aivean correctly pointed out.
Using Aivean's microbenchmark I can see that it is still slightly slower than his approach, but better than the author's original approach.
Author's solution: 377 ms
Ixx's solution: 399 ms
Ixx's solution 2 (eliminated excessive map writes): 110 ms
Sascha Kolberg's solution: 72 ms
Aivean's solution: 54 ms

Carry on information about previous computations

I'm new to functional programming, so some problems seem harder to solve using a functional approach.
Let's say I have a list of numbers, like 1 to 10,000, and I want to get the items of the list that sum up to at most a number n (let's say 100). So, it would take numbers until their sum would exceed 100.
In imperative programming, it's trivial to solve this problem, because I can keep a variable in each iteration and stop once the objective is met.
But how can I do the same in functional programming? Since the sum function operates on complete lists, and I still don't have the complete list, how can I 'carry on' the computation?
If sum was lazily computed, I could write something like that:
(1 to 10000).sum.takeWhile(_ < 100)
P.S.: Even though any answer will be appreciated, I'd like one that doesn't recompute the sum each time, since the imperative version will obviously be much faster.
Edit:
I know that I can "convert" the imperative loop approach into a functional recursive function. I'm more interested in finding out whether one of the existing library functions can provide a way so I don't have to write one each time I need something like this.
Use Stream.
scala> val ss = Stream.from(1).take(10000)
ss: scala.collection.immutable.Stream[Int] = Stream(1, ?)
scala> ss.scanLeft(0)(_ + _)
res60: scala.collection.immutable.Stream[Int] = Stream(0, ?)
scala> res60.takeWhile(_ < 100).last
res61: Int = 91
EDIT:
Obtaining components is not very tricky either. This is how you can do it:
scala> ss.scanLeft((0, Vector.empty[Int])) { case ((sum, compo), cur) => (sum + cur, compo :+ cur) }
res62: scala.collection.immutable.Stream[(Int, scala.collection.immutable.Vector[Int])] = Stream((0,Vector()), ?)
scala> res62.takeWhile(_._1 < 100).last
res63: (Int, scala.collection.immutable.Vector[Int]) = (91,Vector(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13))
The second part of the tuple is your desired result.
As should be obvious, in this case building a vector is wasteful. Instead, we can just track the index of the last number from the stream that contributed to the sum.
scala> ss.scanLeft(0)(_ + _).zipWithIndex
res64: scala.collection.immutable.Stream[(Int, Int)] = Stream((0,0), ?)
scala> res64.takeWhile(_._1 < 100).last._2
res65: Int = 13
The way I would do this is with recursion. On each call, add the next number. Your base case is when the sum is greater than 100, at which point you return all the way up the stack. You'll need a helper function to do the actual recursion, but that's no big deal.
This isn't hard using "functional" methods either.
Using recursion, rather than maintaining your state in a local variable that you mutate, you keep it in parameters and return values.
So, to return the longest initial part of a list whose sum is at most N:
If the list is empty, you're done; return the empty list.
If the head of the list is greater than N, you're done; return the empty list.
Otherwise, let H be the head of the list.
All we need now is the initial part of the tail of the list whose sum is at most N - H, then we can "cons" H onto that list, and we're done.
We can compute this recursively using the same procedure as we have used this far, so it's an easy step.
A simple pseudocode solution:
sum_to (n, ls) = if isEmpty ls or n < (head ls)
then Nil
else (head ls) :: sum_to (n - head ls, tail ls)
sum_to(100, some_list)
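A direct Scala translation of this pseudocode might look like the following (the name sumTo is mine):
def sumTo(n: Int, ls: List[Int]): List[Int] = ls match {
  case head :: tail if head <= n => head :: sumTo(n - head, tail)
  case _                         => Nil
}

sumTo(100, (1 to 10000).toList) // List(1, 2, ..., 13)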
All sequence operations which require only one pass through the sequence can be implemented using folds (or reduce, as it is sometimes called).
I find myself using folds very often since I became used to functional programming, so here is one possible approach.
Use an empty collection as the initial value and fold according to this strategy: given the collection built so far and the new value, check whether their sum is still low enough; if so, append the value to the collection, otherwise do nothing.
This solution is not very efficient, but I want to emphasize the following: map, fold, filter, zip, etc. are the way to get accustomed to functional programming. Try to use them as much as possible instead of looping constructs or hand-written recursive functions; your code will be more declarative and functional. A sketch of this fold is shown below.
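For completeness, a minimal sketch of the fold described above (inefficient, since it recomputes the sum of the accumulated list at every step; the variable names are mine):
val limit = 100
val kept = (1 to 10000).foldLeft(List.empty[Int]) { (acc, x) =>
  if (acc.sum + x <= limit) acc :+ x else acc
}
// kept == List(1, 2, ..., 13)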