Scala, Spark: find element-wise average of N maps

I have N maps (Map[String, Double]) each having the same set of keys. Let's say something like the following:
map1 = ("elem1": 2.0, "elem2": 4.0, "elem3": 3.0)
map2 = ("elem1": 4.0, "elem2": 1.0, "elem3": 1.0)
map3 = ("elem1": 3.0, "elem2": 10.0, "elem3": 2.0)
I need to return a new map with element-wise average of those input maps:
resultMap = ("elem1": 3.0, "elem2": 5.0, "elem3": 2.0)
What's the cleanest way to do that in Scala? Preferably without extra external libraries.
This all happens in Spark, so answers suggesting Spark-specific usage would also be helpful.

One option is to convert all Maps to Seqs, union them to a single Seq, group by key and take the average of values:
val maps = Seq(map1, map2, map3)
maps.map(_.toSeq).reduce(_ ++ _).groupBy(_._1).mapValues(x => x.map(_._2).sum / x.length)
// res6: scala.collection.immutable.Map[String,Double] = Map(elem1 -> 3.0, elem3 -> 2.0, elem2 -> 5.0)
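A minimal alternative sketch (not from the original answer) that relies on the question's guarantee that all maps share the same key set, assuming the maps Seq defined above is non-empty:
val avgByKey = maps.head.keys.map(k => k -> maps.map(_(k)).sum / maps.size).toMap
// avgByKey: Map[String,Double] = Map(elem1 -> 3.0, elem2 -> 5.0, elem3 -> 2.0)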

Since your question is tagged with apache-spark, you can get your desired output by combining the maps into an RDD[Map[String, Double]]:
scala> val rdd = sc.parallelize(Seq(Map("elem1"-> 2.0, "elem2"-> 4.0, "elem3"-> 3.0),Map("elem1"-> 4.0, "elem2"-> 1.0, "elem3"-> 1.0),Map("elem1"-> 3.0, "elem2"-> 10.0, "elem3"-> 2.0)))
rdd: org.apache.spark.rdd.RDD[scala.collection.immutable.Map[String,Double]] = ParallelCollectionRDD[1] at parallelize at <console>:24
Then you can use flatMap to flatten the map entries into individual rows, group them by key with groupBy, sum the grouped values, and divide by the size of each group. You should get your desired output:
scala> rdd.flatMap(row => row).groupBy(kv => kv._1).mapValues(values => values.map(value => value._2).sum/values.size)
res0: org.apache.spark.rdd.RDD[(String, Double)] = MapPartitionsRDD[5] at mapValues at <console>:27
scala> res0.foreach(println)
(elem2,5.0)
(elem3,2.0)
(elem1,3.0)
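If you need the result back on the driver as a plain Scala Map, you can collect the pair RDD with collectAsMap (a small follow-up sketch; the name averaged is just for illustration):
val averaged = rdd.flatMap(row => row)
  .groupBy(_._1)
  .mapValues(vs => vs.map(_._2).sum / vs.size)
averaged.collectAsMap()
// Map(elem1 -> 3.0, elem2 -> 5.0, elem3 -> 2.0) (ordering may vary)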
Hope the answer is helpful.

Related

Reduce Hashmaps per partition in Spark

I have an RDD of mutable.Map[Int, Array[Double]] and I would like to reduce the maps by their Int key and find the element-wise means of the arrays.
For example I have:
Map[(1, Array[0.1, 0.1]), (2, Array[0.3, 0.2])]
Map[(1, Array[0.1, 0.4])]
What I want:
Map[(1, Array[0.1, 0.25]), (2, Array[0.3, 0.2])]
The problem is that I don't know how reduce works between maps and additionally I have to do it per partition, collect the results to the driver and reduce them there too. I found the foreachPartition method but I don't know if it is meant to be used in such cases.
Any ideas?
You can do it using combineByKey:
val rdd = ss.sparkContext.parallelize(Seq(
  Map((1, Array(0.1, 0.1)), (2, Array(0.3, 0.2))),
  Map((1, Array(0.1, 0.4)))
))
// functions for combineByKey
val create = (arr: Array[Double]) => arr.map(x => (x, 1))
val update = (acc: Array[(Double, Int)], current: Array[Double]) => acc.zip(current).map { case ((s, c), x) => (s + x, c + 1) }
val merge = (acc1: Array[(Double, Int)], acc2: Array[(Double, Int)]) => acc1.zip(acc2).map { case ((s1, c1), (s2, c2)) => (s1 + s2, c1 + c2) }
val finalMap = rdd.flatMap(_.toList)
  // aggregate element-wise sum & count
  .combineByKey(create, update, merge)
  // calculate element-wise average per key
  .map { case (id, arr) => (id, arr.map { case (s, c) => s / c }) }
  .collectAsMap()
// finalMap = Map(2 -> Array(0.3, 0.2), 1 -> Array(0.1, 0.25))
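A note on the per-partition concern in the question: combineByKey already does the element-wise sums inside each partition (the update step) before shuffling the partial results, so no explicit foreachPartition pass is needed. For comparison, here is a roughly equivalent sketch using reduceByKey, assuming the same rdd as above (the name averages is just for illustration):
val averages = rdd
  .flatMap(_.toList)                         // (Int, Array[Double]) pairs
  .mapValues(_.map(x => (x, 1)))             // pair each element with a count of 1
  .reduceByKey((a, b) => a.zip(b).map { case ((s1, c1), (s2, c2)) => (s1 + s2, c1 + c2) })
  .mapValues(_.map { case (s, c) => s / c })
  .collectAsMap()
// averages = Map(1 -> Array(0.1, 0.25), 2 -> Array(0.3, 0.2))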

Scala - Reduce list of tuples by key

I have a list of tuples, each containing a userId and a point value. I want to combine or reduce this list by the key.
val points: List[(Int, Double)] = List(
(1, 1.0),
(2, 3.2),
(4, 2.0),
(1, 4.0),
(2, 6.8)
)
The expected result should look like:
List((1, 5.0), (2, 10.0), (4, 2.0))
I tried with groupBy and mapValues, but got an error:
val aggrPoint: Map[Int, Double] = points.groupBy(_._1).mapValues(seq => seq.reduce(_._2 + _._2))
Error:(16, 180) type mismatch;
found : Double
required: (Int, Double)
What am I doing wrong, and is there an idiomatic way to achieve this?
P.S.: I found that in Spark, aggregateByKey does this job. But is there a built-in method in Scala?
What am I doing wrong, and is there an idiomatic way to achieve this?
Let's go step by step to see what you are doing wrong. (I am going to use the REPL.)
First of all, let's define the points:
scala> val points: List[(Int, Double)] = List(
| (1, 1.0),
| (2, 3.2),
| (4, 2.0),
| (1, 4.0),
| (2, 6.8)
| )
points: List[(Int, Double)] = List((1,1.0), (2,3.2), (4,2.0), (1,4.0), (2,6.8))
As you can see, you have a List[Tuple2[Int, Double]], so when you do groupBy and mapValues as
scala> points.groupBy(_._1).mapValues(seq => println(seq))
List((2,3.2), (2,6.8))
List((4,2.0))
List((1,1.0), (1,4.0))
res1: scala.collection.immutable.Map[Int,Unit] = Map(2 -> (), 4 -> (), 1 -> ())
You can see that the seq object is again a List[Tuple2[Int, Double]], but it only contains the grouped tuples for each key.
So when you apply seq.reduce(_._2 + _._2), the reduce function takes two inputs of type Tuple2[Int, Double], but the output is only a Double, which doesn't match the expected input type Tuple2[Int, Double] for the next iteration over seq. That's the main issue. All you have to do is match the input and output types of the reduce function.
One way would be to keep the reduced result as a Tuple2[Int, Double]:
scala> points.groupBy(_._1).mapValues(seq => seq.reduce{(x,y) => (x._1, x._2 + y._2)})
res6: scala.collection.immutable.Map[Int,(Int, Double)] = Map(2 -> (2,10.0), 4 -> (4,2.0), 1 -> (1,5.0))
But this isn't your desired output, so you can extract the Double value from the reduced Tuple2[Int, Double]:
scala> points.groupBy(_._1).mapValues(seq => seq.reduce{(x,y) => (x._1, x._2 + y._2)}._2)
res8: scala.collection.immutable.Map[Int,Double] = Map(2 -> 10.0, 4 -> 2.0, 1 -> 5.0)
Or you can just use map before you apply the reduce function:
scala> points.groupBy(_._1).mapValues(seq => seq.map(_._2).reduce(_ + _))
res3: scala.collection.immutable.Map[Int,Double] = Map(2 -> 10.0, 4 -> 2.0, 1 -> 5.0)
I hope the explanation is clear enough to understand the mistake and how a reduce function works.
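As a side note (not from the original answer): when the result type differs from the element type, foldLeft is the more general tool, because it takes an explicit starting value for the accumulator. A minimal sketch on the same points list:
points.groupBy(_._1).mapValues(_.foldLeft(0.0)(_ + _._2))
// Map(2 -> 10.0, 4 -> 2.0, 1 -> 5.0)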
You can map the tuples inside mapValues to their 2nd elements and then sum them, as follows:
points.groupBy(_._1).mapValues( _.map(_._2).sum ).toList
// res1: List[(Int, Double)] = List((2,10.0), (4,2.0), (1,5.0))
Using collect
points.groupBy(_._1).collect{
case e => e._1 -> e._2.map(_._2).sum
}.toList
//res1: List[(Int, Double)] = List((2,10.0), (4,2.0), (1,5.0))
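On Scala 2.13+, groupMapReduce does the grouping, projection, and reduction in one pass (an alternative sketch, not part of the original answers):
points.groupMapReduce(_._1)(_._2)(_ + _).toList
// List((1,5.0), (2,10.0), (4,2.0)) (map ordering is not guaranteed)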

How to Sum a part of a list in RDD

I have an RDD of key-list pairs, and I would like to sum part of each list to get (key, element2 + element3).
For example, given:
(1, List(2.0, 3.0, 4.0, 5.0)), (2, List(1.0, -1.0, -2.0, -3.0))
the output should look like this:
(1, 7.0), (2, -3.0)
Thanks
You can map and index into the second part of each tuple:
yourRddOfTuples.map(tuple => {val list = tuple._2; list(1) + list(2)})
Update after your comment, convert it to Vector:
yourRddOfTuples.map(tuple => {val vs = tuple._2.toVector; vs(1) + vs(2)})
Or if you do not want to use conversions:
yourRddOfTuples.map(_._2.drop(1).take(2).sum)
This takes the second element of the tuple (.map(_._2)), skips its first list element (.drop(1)), takes the next two (.take(2), possibly fewer if the list is shorter), and sums them (.sum).
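Equivalently, a slice-based sketch under the same assumption that the list is the second tuple element:
yourRddOfTuples.map(_._2.slice(1, 3).sum)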
You can map each key-list pair to the key and the sum of the 2nd and 3rd list elements, as follows:
val rdd = sc.parallelize(Seq(
(1, List(2.0, 3.0, 4.0, 5.0)),
(2, List(1.0, -1.0, -2.0, -3.0))
))
rdd.map{ case (k, l) => (k, l(1) + l(2)) }.collect
// res1: Array[(Int, Double)] = Array((1,7.0), (2,-3.0))

Displaying output under a certain format

I'm quite new to Scala and Spark, and have some questions about writing results to an output file.
I actually have a Map in which each key is associated with a list of lists (Map[Int, List[List[Double]]]), such as:
(2, List(x1,x2,x3), List(y1,y2,y3), ...).
I am supposed to display for each key the values inside the lists of lists, such as:
2 x1,x2,x3
2 y1,y2,y3
1 z1,z2,z3
and so on.
When I use the saveAsTextFile function, it doesn't give me what I want in the output. Does anybody know how I can do it?
EDIT:
This is one of my functions:
def PrintCluster(vectorsByKey : Map[Int, List[Double]], vectCentroidPairs : Map[Int, Int]) : Map[Int, List[Double]] = {
  var vectorsByCentroid: Map[Int, List[Double]] = Map()
  val SortedCentroid = vectCentroidPairs.groupBy(_._2).mapValues(x => x.map(_._1).toList).toSeq.sortBy(_._1).toMap
  SortedCentroid.foreach { case (centroid, vect) =>
    var nbVectors = vect.length
    for (i <- 0 to nbVectors - 1) {
      var vectValues = vectorsByKey(vect(i))
      println(centroid + " " + vectValues)
      vectorsByCentroid += (centroid -> (vectValues))
    }
  }
  return vectorsByCentroid
}
I know it's wrong, because I can only assign one unique key to a group of values. That is why it returns only the first List for each key in the Map. I thought that to use the saveAsTextFile function I necessarily had to use a Map structure, but I don't really know.
Create a sample RDD as per your input data:
import org.apache.spark.rdd.RDD
val rdd: RDD[Map[Int, List[List[Double]]]] = spark.sparkContext.parallelize(
  Seq(Map(
    2 -> List(List(-4.4, -2.0, 1.5), List(-3.3, -5.4, 3.9), List(-5.8, -3.3, 2.3), List(-5.2, -4.0, 2.8)),
    1 -> List(List(7.3, 1.0, -2.0), List(9.8, 0.4, -1.0), List(7.5, 0.3, -3.0), List(6.1, -0.5, -0.6), List(7.8, 2.2, -0.7), List(6.6, 1.4, -1.1), List(8.1, -0.0, 2.7)),
    3 -> List(List(-3.0, 4.0, 1.4), List(-4.0, 3.9, 0.8), List(-1.4, 4.3, -0.5), List(-1.6, 5.2, 1.0))
  ))
)
Transform the RDD[Map[Int, List[List[Double]]]] into an RDD[(Int, String)]:
val result: RDD[(Int, String)] = rdd.flatMap(i => {
  i.map {
    case (x, y) => y.map(list => (x, list.mkString(" ")))
  }
}).flatMap(z => z)
result.foreach(println)
result.saveAsTextFile("location")
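Note that saveAsTextFile on an RDD[(Int, String)] writes each tuple's toString, e.g. (2,-4.4 -2.0 1.5). If you want plain "key values" lines in the file instead, you can format each pair into a single string before saving; a sketch along the same lines (the tab and comma separators are assumed from the desired output):
rdd
  .flatMap(_.toList)
  .flatMap { case (k, lists) => lists.map(l => s"$k\t${l.mkString(",")}") }
  .saveAsTextFile("location")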
Printing a Map[Int, List[List[Double]]] in the wanted format is simple: it can be done by first converting the map to a list and then applying flatMap. Using the data supplied in a comment:
val map: Map[Int, List[List[Double]]] = Map(
  2 -> List(List(-4.4, -2.0, 1.5), List(-3.3, -5.4, 3.9), List(-5.8, -3.3, 2.3), List(-5.2, -4.0, 2.8)),
  1 -> List(List(7.3, 1.0, -2.0), List(9.8, 0.4, -1.0), List(7.5, 0.3, -3.0), List(6.1, -0.5, -0.6), List(7.8, 2.2, -0.7), List(6.6, 1.4, -1.1), List(8.1, -0.0, 2.7)),
  3 -> List(List(-3.0, 4.0, 1.4), List(-4.0, 3.9, 0.8), List(-1.4, 4.3, -0.5), List(-1.6, 5.2, 1.0))
)
val list = map.toList.flatMap(t => t._2.map((t._1, _)))
val result = for (t <- list) yield t._1 + "\t" + t._2.mkString(",")
// Saving the result to file
import java.io._
val pw = new PrintWriter(new File("fileName.txt"))
result.foreach{ line => pw.println(line)}
pw.close
Will print out:
2 -4.4,-2.0,1.5
2 -3.3,-5.4,3.9
2 -5.8,-3.3,2.3
2 -5.2,-4.0,2.8
1 7.3,1.0,-2.0
1 9.8,0.4,-1.0
1 7.5,0.3,-3.0
1 6.1,-0.5,-0.6
1 7.8,2.2,-0.7
1 6.6,1.4,-1.1
1 8.1,-0.0,2.7
3 -3.0,4.0,1.4
3 -4.0,3.9,0.8
3 -1.4,4.3,-0.5
3 -1.6,5.2,1.0

Scala select n-th element of the n-th element of an array / RDD

I have the following RDD:
val a = List((3, 1.0), (2, 2.0), (4, 2.0), (1,0.0))
val rdd = sc.parallelize(a)
Ordering the tuples by their right-hand component in ascending order, I would like to:
Pick the 2nd smallest tuple, i.e. (3, 1.0)
Select its left-hand element, i.e. 3
The following code does that but it is so ugly and inefficient that I was wondering if someone could suggest something better.
val b = ((rdd.takeOrdered(2).zipWithIndex.map{case (k,v) => (v,k)}).toList find {x => x._1 == 1}).map(x => x._2).map(x=> x._1)
Simply:
implicit val ordering = scala.math.Ordering.Tuple2[Double, Int]
rdd.map(_.swap).takeOrdered(2).max match { case (_, key) => key }
// takeOrdered(2) returns the two smallest (value, key) pairs; their max is the 2nd smallest, and its key is 3
Sorry, I don't know Spark, but maybe a standard method from Scala collections would work?:
a.sortBy { case (k, v) => (v, k) }.apply(1)._1 // indices are 0-based, so the 2nd smallest is at index 1; ._1 picks its left-hand element
val a = List((3, 1.0), (2, 2.0), (4, 2.0), (1,0.0))
val rdd = sc.parallelize(a)
rdd.map(_.swap).takeOrdered(2).max._2 // after the swap the default Tuple2 ordering sorts by value first, so this yields 3