Recently I was asked (in a class assignment) to find the top 10 occurring words inside RDD. I submitted my assignment with a working solution which looks like
wordsRdd
  .map(x => (x, 1))
  .reduceByKey(_ + _)
  .map { case (x, y) => (y, x) }   // swap so the count becomes the key
  .sortByKey(false)                // sort by count, descending
  .map { case (x, y) => (y, x) }   // swap back to (word, count)
  .take(10)
So basically, I swap the tuple, sort by key, and then swap again. Then finally take 10. I don't find the repeated swapping very elegant.
So I wonder if there is a more elegant way of doing this.
I searched and found some people using Scala implicits to convert the RDD into a Scala sequence and then sorting by value, but I don't want to convert the RDD to a Scala Seq, because that would kill the distributed nature of the RDD.
So is there a better way?
How about this:
wordsRdd.
map(x => (x, 1)).
reduceByKey(_ + _).
takeOrdered(10)(Ordering.by(-1 * _._2))
or a little bit more verbose:
object WordCountPairsOrdering extends Ordering[(String, Int)] {
def compare(a: (String, Int), b: (String, Int)) = b._2.compare(a._2)
}
wordsRdd.
map(x => (x, 1)).
reduceByKey(_ + _).
takeOrdered(10)(WordCountPairsOrdering)
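For what it's worth, here is a quick check of the verbose variant on made-up data (the word list below is illustrative). Unlike the sort-and-swap approach, takeOrdered only keeps a bounded number of candidates per partition, so the counts are never fully sorted:
val wordsRdd = sc.parallelize(Seq("spark", "rdd", "spark", "scala", "spark", "rdd"))

val top10 = wordsRdd
  .map(x => (x, 1))
  .reduceByKey(_ + _)
  .takeOrdered(10)(WordCountPairsOrdering)   // highest counts first, at most 10 of them
// top10: Array[(String, Int)] = Array((spark,3), (rdd,2), (scala,1))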
Related
I have a flatMap that returns the sequence Seq((20,6),(22,6),(23,6),(24,6),(20,1),(22,1)). Now I need to use reduceByKey() on the sequence I got from the flatMap to find the minimum value for each key.
I tried using .reduceByKey(a,min(b)) and .reduceByKey((a, b) => if (a._1 < b._1) a else b) but neither of them works.
This is my code
for (i <- 1 to 5) {
  var graph = graph.flatMap { in => in match { case (x, y, zs) => (x, y) :: zs.map(z => (z, y)) } }
    .reduceByKey((a, b) => if (a._1 < b._1) a else b)
}
For each distinct key the flatMap generates, I need to get the minimum value for that key. E.g. if the flatMap generates Seq((20,6),(22,6),(23,6),(24,6),(20,1),(22,1)), the reduceByKey() should generate (20,1),(22,1),(23,6),(24,6).
Here is the signature of reduceByKey:
def reduceByKey(func: (V, V) ⇒ V): RDD[(K, V)]
Basically, given an RDD of key/value pairs, you need to provide a function that reduces two values (and not the entire pairs) into one. Therefore, you can use it as follows:
val rdd = sc.parallelize(Seq((20,6),(22,6),(23,6),(24,6),(20,1),(22,1)))
val result = rdd.reduceByKey((a, b) => if (a < b) a else b)
result.collect
// Array[(Int, Int)] = Array((24,6), (20,1), (22,1), (23,6))
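Equivalently (a small variation, not from the original answer), since the values are plain Ints the reducer can be written with math.min:
val result2 = rdd.reduceByKey(math.min(_, _))
result2.collect
// same result as above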
How can I get the intersection of values in key/value pairs?
I have pairs:
(p, Set(n))
on which I used reduceByKey and finally got:
(p1, Set(n1, n2)) (p2, Set(n1, n2, n3)) (p3, Set(n2, n3))
What I want is to find the n values that exist in all of the pairs and keep those as the value. For the above data, the result would be
(p1, Set(n2)) (p2, Set(n2)), (p3, Set(n2))
As far as I searched, there is no reduceByValue in Spark. The only function that seemed close to what I want was reduce(), but it didn't work, as the result was only one key/value pair ((p3, Set(n2))).
Is there any way to solve it? Or should I think about it differently from the start?
Code:
val rRdd = inputFile.map(x => (x._1, Set(x._2))).reduceByKey(_ ++ _)
val wrongRdd = rRdd.reduce{(x, y) => (x._1, x._2.intersect(y._2))}
I can see why wrongRdd is not correct; I just included it to show where (p3, Set(n2)) came from.
You can first reduce the sets to their intersection (say, s), then replace (k, v) with (k, s):
val rdd = sc.parallelize(Seq(
("p1", Set("n1", "n2")),
("p2", Set("n1", "n2", "n3")),
("p3", Set("n2", "n3"))
))
val s = rdd.map(_._2).reduce(_ intersect _)
// s: scala.collection.immutable.Set[String] = Set(n2)
rdd.map{ case (k, v) => (k, s) }.collect
// res1: Array[(String, scala.collection.immutable.Set[String])] = Array(
// (p1,Set(n2)), (p2,Set(n2)), (p3,Set(n2))
// )
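A minor aside on the same approach: s is an ordinary Set that lives on the driver and gets captured by the task closures. If it could be large, broadcasting it is a common optional tweak (not part of the original answer), and mapValues keeps the keys and partitioning untouched:
val sB = sc.broadcast(s)
rdd.mapValues(_ => sB.value).collect
// same result: Array((p1,Set(n2)), (p2,Set(n2)), (p3,Set(n2)))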
Given an Iterator[(String, Int)], I would like to group by the String, sum the Ints, and return the result as a Map[String, Int].
You can convert it to a list or other strict structure:
iter.toList.groupBy(_._1).mapValues(_.map(_._2).sum)
If you don't want to convert to a strict structure (which forces all of the entries into memory), you can foldLeft and build the map as you go:
(Map.empty[String,Int] /: iter) {case (acc, (k,v)) =>
acc + (k -> acc.get(k).map(_ + v).getOrElse(v))
}
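For reference, a quick check of the foldLeft version with made-up data (remember that an Iterator can only be traversed once, so pick one of the two approaches per iterator):
val pairs = Iterator(("a", 1), ("b", 2), ("a", 3))

val summed = (Map.empty[String, Int] /: pairs) { case (acc, (k, v)) =>
  acc + (k -> acc.get(k).map(_ + v).getOrElse(v))
}
// summed: Map[String,Int] = Map(a -> 4, b -> 2)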
I am preparing for CCA175 and am using the oldest available version of Spark, 1.3.0.
As shown below, I am converting the element to Float while mapping, but the reduce step gives a compile-time error.
scala> val revenuePerDay = ordersJoinOrderItems.map(x => (x._2._1, (x._1, (x._2._2).toFloat)))
revenuePerDay: org.apache.spark.rdd.RDD[(String, (Int, Float))] =
MapPartitionsRDD[21] at map at <console>:31
After mapping I can see that the value contains a Float, but when I run the command below I get an error:
scala> revenuePerDay.reduceByKey((x,y) => x._2._2 + y._2._2)
<console>:34: error: value _2 is not a member of Float
revenuePerDay.reduceByKey((x,y) => x._2._2 + y._2._2)
^
PairRDDFunctions.reduceByKey works on a pair of values:
def reduceByKey(func: (V, V) ⇒ V): RDD[(K, V)]
Since your records are of the form (String, (Int, Float)), the key (String) isn't passed to your function at all; x and y are the values, of type (Int, Float), which is why x._2._2 fails.
Moreover, reduceByKey expects a function of type (V, V) => V. Since your value type is (Int, Float) and you want a Float result, reduceByKey alone won't work.
Instead, we'll need to use the more verbose PairRDDFunctions.combineByKey:
revenuePerDay.combineByKey[Float](_._2, (acc, x) => acc + x._2, (x, y) => x + y)
Or, you can use the slightly simpler PairRDDFunctions.aggregateByKey:
revenuePerDay.aggregateByKey(0F)((acc, x) => acc + x._2, (x, y) => x + y)
Edit
Another suggestion, by @zero323, is to use mapValues with reduceByKey:
revenuePerDay.mapValues(_._2).reduceByKey(_ + _)
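A quick check of the mapValues route on made-up data (keys and amounts below are illustrative):
val revenuePerDay = sc.parallelize(Seq(
  ("2014-01-01", (1, 100.0f)),
  ("2014-01-01", (2, 50.0f)),
  ("2014-01-02", (3, 20.0f))
))

revenuePerDay.mapValues(_._2).reduceByKey(_ + _).collect
// e.g. Array((2014-01-01,150.0), (2014-01-02,20.0))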
I am an Apache Spark learner and have come across the RDD action aggregate, which I have no clue how it works. Can someone explain in detail, step by step, how we arrive at the result below for the code here?
RDD input: {1, 2, 3, 3}
RDD aggregate call:
rdd.aggregate((0, 0))(
  (x, y) => (x._1 + y, x._2 + 1),
  (x, y) => (x._1 + y._1, x._2 + y._2)
)
output: (9, 4)
Thanks
If you are not sure what is going on, it is best to follow the types. Omitting the implicit ClassTag for brevity, we start with something like this:
abstract class RDD[T] extends Serializable with Logging
def aggregate[U](zeroValue: U)(seqOp: (U, T) ⇒ U, combOp: (U, U) ⇒ U): U
If you ignore all the additional parameters, you'll see that aggregate is a function which maps from RDD[T] to U. That means the type of the values in the input RDD doesn't have to be the same as the type of the output value. So it is clearly different from, for example, reduce:
def reduce(func: (T, T) ⇒ T): T
or fold:
def fold(zeroValue: T)(op: (T, T) => T): T
Like fold, aggregate requires a zeroValue. How do you choose it? It should be an identity (neutral) element with respect to combOp.
You also have to provide two functions:
seqOp which maps from (U, T) to U
combOp which maps from (U, U) to U
Just based on these signatures you should already see that only seqOp may access the raw data. It takes some value of type U and another one of type T, and returns a value of type U. In your case it is a function with the following signature:
((Int, Int), Int) => (Int, Int)
At this point you probably suspect it is used for some kind of fold-like operation.
The second function takes two arguments of type U and returns a value of type U. As stated before, it should be clear that it doesn't touch the original data and can operate only on the values already processed by seqOp. In your case this function has the following signature:
((Int, Int), (Int, Int)) => (Int, Int)
So how can we get all of that together?
First, each partition is aggregated using the standard Iterator.aggregate with zeroValue, seqOp and combOp passed as z, seqop and combop respectively. Since the InterruptibleIterator used internally doesn't override aggregate, this is executed as a simple foldLeft(zeroValue)(seqOp).
Next, the partial results collected from each partition are aggregated using combOp.
Let's assume that the input RDD has three partitions with the following distribution of values:
Iterator(1, 2)
Iterator(3, 3)
Iterator()
You can expect that execution, ignoring absolute order, will be equivalent to something like this:
val seqOp = (x: (Int, Int), y: Int) => (x._1 + y, x._2 + 1)
val combOp = (x: (Int, Int), y: (Int, Int)) => (x._1 + y._1, x._2 + y._2)
Seq(Iterator(1, 2), Iterator(3, 3), Iterator())
.map(_.foldLeft((0, 0))(seqOp))
.reduce(combOp)
The foldLeft for a single partition can be unfolded like this:
Iterator(1, 2).foldLeft((0, 0))(seqOp)   // seqOp((0, 0), 1) == (1, 1)
Iterator(2).foldLeft((1, 1))(seqOp)      // seqOp((1, 1), 2) == (3, 2)
(3, 2)
and over all partitions
Seq((3,2), (6,2), (0,0))
which, combined with combOp, will give you the observed result:
(3 + 6 + 0, 2 + 2 + 0)
(9, 4)
In general, this is a common pattern you will find all over Spark: you pass a neutral value, a function used to process values per partition, and a function used to merge partial aggregates from different partitions. Some other examples (see the aggregateByKey sketch after this list) include:
aggregateByKey
User Defined Aggregate Functions
Aggregators on Spark Datasets.
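For instance, here is a minimal sketch of the same (zeroValue, seqOp, combOp) pattern with aggregateByKey, computing a per-key (sum, count) pair; the sample data is made up:
val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3), ("b", 3)))

val sumAndCount = pairs.aggregateByKey((0, 0))(
  (acc, v) => (acc._1 + v, acc._2 + 1),    // seqOp: fold values within a partition
  (a, b) => (a._1 + b._1, a._2 + b._2)     // combOp: merge partial results across partitions
)
// sumAndCount.collect => Array((a,(3,2)), (b,(6,2)))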
Here is my understanding for your reference:
Imagine you have two nodes: one takes the first two list elements {1, 2}, and the other takes {3, 3}. (The partitioning here is only for convenience.)
At the first node:
"(x, y) => (x._1 + y, x._2 + 1)" , the first x is (0,0) as given, and y is your first element 1, and you will have output (0+1, 0+1), then comes your second element y=2, and output (1 + 2, 1 + 1), which is (3, 2)
At the second node, same procedure happens in parallel, and you'll have (6, 2).
"(x, y) => (x._1 + y._1, x._2 + y._2)", tells you to merge two nodes, and you'll get (9,4)
One thing worth noticing is that the zero value (0, 0) is actually folded into the result numPartitions + 1 times: once per partition, plus once more when the partial results are merged on the driver. In the run below the RDD apparently had 4 partitions, which is why it looks like length(rdd) + 1 times.
"scala> rdd.aggregate((1,1)) ((x, y) =>(x._1 + y, x._2 + 1), (x, y) => (x._1 + y._1, x._2 + y._2))
res1: (Int, Int) = (14,9)"