Recursive arithmetical operations with Scala

I have a Scala List object with a recursive definition of all the operations I have to apply to the columns of a Spark DataFrame.
For example, the operations
(C1 - C2) + ((C3 - C4) - (C5 - C6))
are defined by the following Scala List:
List("addition", List("substraction",List("C1","C2")),
List("substraction",
List("substraction",List("C3","C4")),
List("substraction"), List("C5","C6"))
)
where "C1",...,"C5" are the names of the spark dataframes columns.
I would like to define a recursive scala function that gives me the final column result.
Does anyone know a way to do it?

The way you define the operation is quite strange. You wrap column-name operands in a list, but not complex operands, so your lists can have either two or three elements. How would you define something like (A + (B - C))? I would start by fixing that and write your operation either like this (three elements per list):
val list = List("addition",
List("substraction","C1","C2"),
List("substraction",
List("substraction","C3","C4"),
List("substraction", "C5","C6")
)
)
or like this (two elements per list):
val list = List("addition", List(
List("substraction", List("C1","C2")),
List("substraction", List(
List("substraction", List("C3","C4")),
List("substraction", List("C5","C6"))
)))
)
The second version being much more verbose, let's pick the first one and write the recursive function:
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.col

def operation_to_col(operation: Any): Column = {
  operation match {
    case x: String => col(x)
    case List("addition", s1: Any, s2: Any) =>
      operation_to_col(s1) + operation_to_col(s2)
    case List("substraction", s1: Any, s2: Any) =>
      operation_to_col(s1) - operation_to_col(s2)
  }
}
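A quick sanity check of this version (a sketch, assuming a SparkSession named spark and the three-element list defined above):
import spark.implicits._

val df = Seq((1000, 1, 2, 3, 4, 5)).toDF("C1", "C2", "C3", "C4", "C5", "C6")
df.select(operation_to_col(list)).show()  // (1000 - 1) + ((2 - 3) - (4 - 5)) = 999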

First, I am going to change the definition of the operations. For example, the operations
(C1 - C2) + ((C3 - C4) - (C5 - C6))
are defined by the following Scala List:
val list = List("addition",
List("substraction","C1","C2"),
List("substraction",
List("substraction","C3","C4"),
List("substraction", "C5","C6")
)
)
I am going to create a DataFrame for the example:
import spark.implicits._ // needed for rdd.toDF outside the spark-shell

val data = Seq((1000, 1, 2, 3, 4, 5), (2000, 1, 2, 3, 4, 5), (3000, 1, 2, 3, 4, 5))
val rdd = spark.sparkContext.parallelize(data)
val df = rdd.toDF("C1", "C2", "C3", "C4", "C5", "C6")
The List of permitted operations is:
val operations = List("addition", "substraction", "multiplication", "division") // spelled "substraction" to match the operation list and the Map below
I created the following Map object to associate the operations with their symbols:
val oprSimbols: Map[String, String] = Map("addition" -> "+", "substraction" -> "-", "multiplication" -> "*", "division" -> "/")
Finally, I define the function that solves the problem:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

def operation_to_col(df: DataFrame, oprSimbols: Map[String, String],
                     operations: List[String], list: Any): DataFrame = {
  list match {
    // base case: a bare column name
    case x: String if !operations.contains(x) => df.select(col(x))
    // simple operation on two column names, e.g. List("substraction", "C1", "C2")
    case List(oprName: String, x: String, y: String) =>
      val sym = oprSimbols(oprName)
      val exprOpr = List(x, sym, y).mkString(" ")
      df.selectExpr(exprOpr)
    // nested operation: solve both sides, then combine their expression strings
    case List(oprName: String, s1: Any, s2: Any) =>
      val df1 = operation_to_col(df, oprSimbols, operations, s1)
      val df2 = operation_to_col(df, oprSimbols, operations, s2)
      val sym = oprSimbols(oprName)
      val exprOpr = List(df1.columns(0), sym, df2.columns(0)).mkString(" ")
      df.selectExpr(exprOpr)
  }
}
We can check it:
operation_to_col(df, oprSimbols, operations, list)
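For the sample data the function builds the single expression (C1 - C2) + ((C3 - C4) - (C5 - C6)), so the three rows should evaluate to 999, 1999 and 2999 (e.g. (1000 - 1) + ((2 - 3) - (4 - 5)) = 999). A quick way to verify:
val result = operation_to_col(df, oprSimbols, operations, list)
result.show()  // a single column that should contain 999, 1999 and 2999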

Related

Spark - aggregateByKey Type mismatch error

I am trying to find the problem behind this. I am trying to find the maximum Marks of each student using aggregateByKey.
val data = Seq(("R1", "M", 22), ("R1", "E", 25), ("R1", "F", 29),
               ("R2", "M", 20), ("R2", "E", 32), ("R2", "F", 52))
  .toDF("Name", "Subject", "Marks")
def seqOp = (acc: Int, ele: (String, Int)) => if (acc > ele._2) acc else ele._2
def combOp = (acc: Int, acc1: Int) => if (acc > acc1) acc else acc1
val r = data.rdd.map { case (t1, t2, t3) => (t1, (t2, t3)) }.aggregateByKey(0)(seqOp, combOp)
I am getting an error saying that aggregateByKey accepts (Int,(Any,Any)) but the actual type is (Int,(String,Int)).
Your map function is incorrect, since you have a Row as input, not a Tuple3.
Fix the last line with:
val r = data.rdd.map { r =>
  val t1 = r.getAs[String](0)
  val t2 = r.getAs[String](1)
  val t3 = r.getAs[Int](2)
  (t1, (t2, t3))
}.aggregateByKey(0)(seqOp, combOp)
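As an alternative (a sketch, not part of the original answer), you can also pattern match on the Row directly:
import org.apache.spark.sql.Row

val r = data.rdd.map { case Row(name: String, subject: String, marks: Int) =>
  (name, (subject, marks))
}.aggregateByKey(0)(seqOp, combOp)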

Spark - calculate max occurrence per day-event

I have the following RDD[String]:
1:AAAAABAAAAABAAAAABAAABBB
2:BBAAAAAAAAAABBAAAAAAAAAA
3:BBBBBBBBAAAABBAAAAAAAAAA
The first number is supposed to be days and the following characters are events.
I have to calculate the day where each event has the maximum occurrence.
The expected result for this dataset should be:
{ "A" -> Day2 , "B" -> Day3 }
(A is repeated 20 times on day 2 and B 10 times on day 3.)
I am splitting the original dataset:
val foo = rdd.map(_.split(":")).map(x => (x(0), x(1).split("")))
What could be the best implementation for count and aggregation?
Any help is appreciated.
This should do the trick:
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.functions._

val rdd = sqlContext.sparkContext.makeRDD(Seq(
  "1:AAAAABAAAAABAAAAABAAABBB",
  "2:BBAAAAAAAAAABBAAAAAAAAAA",
  "3:BBBBBBBBAAAABBAAAAAAAAAA"
))
val keys = Seq("A", "B")
val seqOfMaps: RDD[(String, Map[String, Int])] = rdd.map { str =>
  val split = str.split(":")
  (s"Day${split.head}", split(1).groupBy(a => a.toString).mapValues(_.length))
}
keys.map { key =>
  key -> seqOfMaps.mapValues(_.get(key).get).sortBy(a => -a._2).first._1
}.toMap
The processing that needs to be done consists in transforming the data into an RDD shape on which it is easy to apply functions like "find the maximum of a list".
I will try to explain it step by step.
I've used dummy data for the "A" and "B" chars.
The foo RDD from the question is the first step; it gives you an RDD[(String, Array[String])].
Let's count the occurrences of each char in the Array[String]:
val res3 = foo.map { case (d, s) => (d, s.toList.groupBy(c => c).map { case (x, xs) => (x, xs.size) }.toList) }
(1,List((A,18), (B,6)))
(2,List((A,20), (B,4)))
(3,List((A,14), (B,10)))
Next we flatMap over the values to expand our RDD by char:
res3.flatMapValues(list => list)
(3,(A,14))
(3,(B,10))
(1,(A,18))
(2,(A,20))
(2,(B,4))
(1,(B,6))
Rearrange the RDD so it reads better:
res5.map { case (d, (s, c)) => (s, c, d) }
(A,20,2)
(B,4,2)
(A,18,1)
(B,6,1)
(A,14,3)
(B,10,3)
Now we group by char:
res7.groupBy(_._1)
(A,CompactBuffer((A,18,1), (A,20,2), (A,14,3)))
(B,CompactBuffer((B,6,1), (B,4,2), (B,10,3)))
Finally we take the maximum count for each char:
res9.map { case (s, list) => (s, list.maxBy(_._2)) }
(B,(B,10,3))
(A,(A,20,2))
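If you want exactly the Map from the question, one more step (a sketch) reshapes that result and collects it to the driver:
val byChar = res9.map { case (s, list) => (s, list.maxBy(_._2)) }
val answer = byChar.map { case (s, (_, _, d)) => (s, s"Day$d") }.collect().toMap
// Map(A -> Day2, B -> Day3)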
Hope this helps.
The previous answers are good, but I prefer the following solution:
val data = Seq(
  "1:AAAAABAAAAABAAAAABAAABBB",
  "2:BBAAAAAAAAAABBAAAAAAAAAA",
  "3:BBBBBBBBAAAABBAAAAAAAAAA"
)
val initialRDD = sparkContext.parallelize(data)
// to tuples like (1,'A',18)
val charCountRDD = initialRDD.flatMap(s => {
  val parts = s.split(":")
  val charCount = parts(1).groupBy(i => i).mapValues(_.length)
  charCount.map(i => (parts(0), i._1, i._2))
})
// group by character, and take the max value from each grouped collection
val result = charCountRDD.groupBy(i => i._2).map(k => k._2.maxBy(z => z._3))
result.foreach(println(_))
Result is:
(3,B,10)
(2,A,20)

Finding values within broadcast variable

I want to join two sets using a broadcast variable. I am trying to implement the first suggestion from Spark: what's the best strategy for joining a 2-tuple-key RDD with single-key RDD?
val emp_newBC = sc.broadcast(emp_new.collectAsMap())
val joined = emp.mapPartitions({ iter =>
  val m = emp_newBC.value
  for {
    ((t, w)) <- iter
    if m.contains(t)
  } yield ((w + '-' + m.get(t).get), 1)
}, preservesPartitioning = true)
However, as mentioned here: broadcast variable fails to take all data, I need to use collect() rather than collectAsMap(). I tried to adjust my code as below:
val emp_newBC = sc.broadcast(emp_new.collect())
val joined = emp.mapPartitions({ iter =>
  val m = emp_newBC.value
  for {
    ((t, w)) <- iter
    if m.contains(t)
    amk = m.indexOf(t)
  } yield ((w + '-' + emp_newBC.value(amk)), 1) // yield ((t, w), (m.get(t).get)) //((w + '-' + m.get(t).get),1)
}, preservesPartitioning = true)
But it seems m.contains(t) does not respond. How can I remedy this?
Thanks in advance.
How about something like this?
val emp_newBC = sc.broadcast(emp_new.groupByKey.collectAsMap)
val joined = emp.mapPartitions(iter => for {
  (k, v1) <- iter
  v2 <- emp_newBC.value.getOrElse(k, Iterable())
} yield (s"$v1-$v2", 1))
Regarding your code... As far as I understand, emp_new is an RDD[(String, String)]. When it is collected you get an Array[(String, String)]. When you use
((t, w)) <- iter
t is a String, so m.contains(t) will always return false.
Another problem I see is preservesPartitioning = true inside mapPartitions. There are a few possible scenarios:
emp is partitioned and you want joined to be partitioned as well. Since you change the key from t to some new value, partitioning cannot be preserved and the resulting RDD has to be repartitioned. If you use preservesPartitioning = true, the output RDD will end up with wrong partitions.
emp is partitioned but you don't need partitioning for joined. There is no reason to use preservesPartitioning.
emp is not partitioned. Setting preservesPartitioning has no effect.

Does a filter function exist which stops when it finds the first n elements matching a predicate

I ask this question because I had to find one specific element in an RDD[(Int, Array[Double])] where keys are unique. It would be costly to run filter on the entire RDD when I just need one element whose key I know.
val wantedkey = 94
val res = rdd.filter(x => x._1 == wantedkey)
Thank you for your advice.
Look at the lookup function in PairRDDFunctions.scala.
def lookup(key: K): Seq[V]
Return the list of values in the RDD for key key. This operation is
done efficiently if the RDD has a known partitioner by only searching
the partition that the key maps to.
Example
val a = sc.parallelize(List("dog", "tiger", "lion", "cat", "panther", "eagle"), 2)
val b = a.keyBy(_.length)
b.lookup(5)
res0: Seq[String] = WrappedArray(tiger, eagle)
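Applied to the asker's RDD[(Int, Array[Double])] (a sketch; the partition count is an arbitrary choice), pre-partitioning lets lookup scan only the partition that key 94 maps to:
import org.apache.spark.HashPartitioner

val partitioned = rdd.partitionBy(new HashPartitioner(8)).cache()
val res: Seq[Array[Double]] = partitioned.lookup(94)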
All transformations are lazy and they are computed only when you call action on them. So you can just write:
val wantedkey = 94
val res = rdd.filter(x => x._1 == wantedkey).first()

Summing items within a Tuple

Below is a data structure of a List of tuples, of type List[(String, String, Int)]:
val data3 = List(("id1", "a", 1), ("id1", "a", 1), ("id1", "a", 1), ("id2", "a", 1))
//> data3 : List[(String, String, Int)] = List((id1,a,1), (id1,a,1), (id1,a,1),
//| (id2,a,1))
I'm attempting to count the occurrences of each Int value associated with each id, so the above data structure should be converted to List((id1,a,3), (id2,a,1)).
This is what I have come up with, but I'm unsure how to group similar items within a Tuple:
data3.map { case (id, name, num) => (id, name, num + 1) }
//> res0: List[(String, String, Int)] = List((id1,a,2), (id1,a,2), (id1,a,2), (i
//| d2,a,2))
In practice data3 is a Spark RDD; I'm using a List here for local testing, but the same solution should be compatible with an RDD.
Update: based on the following code provided by maasg:
val byKey = rdd.map { case (id1, id2, v) => (id1, id2) -> v }
val byKeyGrouped = byKey.groupByKey
val result = byKeyGrouped.map { case ((id1, id2), values) => (id1, id2, values.sum) }
I needed to amend it slightly to get the format I expect, which is of type RDD[(String, Seq[(String, Int)])], corresponding to RDD[(id, Seq[(name, count-of-names)])]:
val byKey = rdd.map { case (id1, id2, v) => (id1, id2) -> v }
val byKeyGrouped = byKey.groupByKey
val result = byKeyGrouped.map { case ((id1, id2), values) => (id1, (id2, values.sum)) }
val counted = result.groupByKey
In Spark, you would do something like this: (using Spark Shell to illustrate)
val l = List(("id1", "a", 1), ("id1", "a", 1), ("id1", "a", 1), ("id2", "a", 1))
val rdd = sc.parallelize(l)
val grouped = rdd.groupBy { case (id1, id2, v) => (id1, id2) }
val result = grouped.map { case ((id1, id2), values) => (id1, id2, values.foldLeft(0) { case (cumm, tuple) => cumm + tuple._3 }) }
Another option would be to map the rdd into a PairRDD and use groupByKey:
val byKey = rdd.map { case (id1, id2, v) => (id1, id2) -> v }
val byKeyGrouped = byKey.groupByKey
val result = byKeyGrouped.map { case ((id1, id2), values) => (id1, id2, values.sum) }
Option 2 is a slightly better option when handling large sets, as it does not replicate the ids in the accumulated value.
This seems to work when I use scala-ide:
data3
  .groupBy(tupl => (tupl._1, tupl._2))
  .mapValues(v => (v.head._1, v.head._2, v.map(_._3).sum))
  .values.toList
And the result is the same as required by the question
res0: List[(String, String, Int)] = List((id1,a,3), (id2,a,1))
You should look into List.groupBy.
You can use the id as the key, and then use the length of your values in the map (i.e. all the items sharing the same id) to know the count.
#vptheron has the right idea.
As can be seen in the docs
def groupBy[K](f: (A) ⇒ K): Map[K, List[A]]
Partitions this list into a map of lists according to some discriminator function.
Note: this method is not re-implemented by views. This means when applied to a view it will always force the view and return a new list.
K the type of keys returned by the discriminator function.
f the discriminator function.
returns
A map from keys to lists such that the following invariant holds:
(xs partition f)(k) = xs filter (x => f(x) == k)
That is, every key k is bound to a list of those elements x for which f(x) equals k.
So something like the function below, when used with groupBy, will give you a Map with keys being the ids.
(Sorry, I don't have access to a Scala compiler, so I can't test.)
def f(tuple: (String, String, Int)): String = tuple._1
Then you will have to iterate through the List for each id in the Map and sum up the number of integer occurrences. That is straightforward, but if you still need help, ask in the comments.
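A minimal plain-Scala sketch of that approach, using data3 from the question (keying on the id, then summing the Ints per group):
val counted = data3
  .groupBy(_._1)                                                            // key on the id, as described above
  .map { case (id, items) => (id, items.head._2, items.map(_._3).sum) }     // keep the name, sum the counts
  .toList
// List((id1,a,3), (id2,a,1)) (order may vary)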
The following is the most readable, efficient and scalable:
data.map {
  case (key1, key2, value) => ((key1, key2), value)
}.reduceByKey(_ + _)
which will give an RDD[((String, String), Int)]. By using reduceByKey the summation will parallelize, i.e. for very large groups it will be distributed and the summation will happen on the map side. Think about the case where there are only 10 groups but billions of records: using .sum won't scale, as it will only be able to distribute to 10 cores.
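An end-to-end sketch with the sample data from the question (in a Spark shell, so sc is available), adding a final map back to the original tuple shape:
val rdd = sc.parallelize(List(("id1", "a", 1), ("id1", "a", 1), ("id1", "a", 1), ("id2", "a", 1)))

val summed = rdd
  .map { case (key1, key2, value) => ((key1, key2), value) }
  .reduceByKey(_ + _)
  .map { case ((key1, key2), total) => (key1, key2, total) }  // back to (id, name, count)

summed.collect()  // expected (order may vary): Array((id1,a,3), (id2,a,1))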
A few more notes about the other answers:
Using head here is unnecessary: instead of .mapValues(v => (v.head._1, v.head._2, v.map(_._3).sum)) you can map over the key directly, e.g. .map { case ((k1, k2), v) => (k1, k2, v.map(_._3).sum) }
Using a foldLeft here is really horrible when the above shows .map(_._3).sum will do: val result = grouped.map { case ((id1, id2), values) => (id1, id2, values.foldLeft(0) { case (cumm, tuple) => cumm + tuple._3 }) }