Spark DataFrame aggregation in Scala

val df = sc.parallelize(Seq(("a", 1), ("a", null), ("b", null), ("b", 2), ("b", 3), ("c", 2), ("c", 4), ("c", 3))).toDF("col1", "col2")
The output should be like below.
col1 col2
a null
b null
c 4
I know that I can group by col1 and take the max of col2, which I can do with df.groupBy("col1").agg("col2" -> "max").
But my requirement is: if a group contains a null, I want to select null for that group; if there is no null, I want the max of col2.
How can I do this? Can anyone please help me?

As I commented, your use of null makes things unnecessarily problematic, so if you can't work without null in the first place, I think it makes most sense to turn it into something more useful:
val df = sparkContext.parallelize(Seq(("a", 1), ("a", null), ("b", null), ("b", 2), ("b", 3), ("c", 2), ("c", 4), ("c", 3)))
  .mapValues { v =>
    Option(v) match {
      case Some(i: Int) => i
      case _            => Int.MaxValue
    }
  }
  .groupBy(_._1)
  .map { case (k, v) => k -> v.map(_._2).max }
First, I use Option to get rid of null and to move things down the tree from Any to Int so I can enjoy more type safety. I replace null with MaxValue for reasons I'll explain shortly.
Then I groupBy as you did, but then I map over the groups to pair the keys with the max of the values, which will either be one of your original data items or MaxValue where the nulls once were. If you must, you can turn them back into null, but I wouldn't.
There might be a simpler way to do all this, but I like the null replacement with MaxValue, the pattern matching which helps me narrow the types, and the fact I can just treat everything the same afterwards.
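If you would rather stay in the DataFrame API, here is an untested sketch of the same idea, assuming col2 is a nullable integer column and spark.implicits._ is in scope for the $ syntax (per group it returns null as soon as the group contains a null, otherwise the max):
import org.apache.spark.sql.functions._

// Untested sketch: per group, return null if any col2 is null, otherwise max(col2).
val result = df.groupBy($"col1").agg(
  when(count(when($"col2".isNull, 1)) > 0, lit(null).cast("int"))
    .otherwise(max($"col2"))
    .as("col2")
)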

Related

Scala Slick 3 - How to get non-matching results on joinLeft?

I would like to join two tables and get the rows from the first table that don't have a matching row in the second table, for some condition on a certain column. For example:
tableA.joinLeft(tableB)
.on((a: A, b: B) => a.key === b.key && a.field1 =!= b.field1)
.filter(_._2.map(_.key).isEmpty)
.map(_._1)
but this checks that key==null in tableB instead of checking on the result of the join. What am I doing wrong?
Perhaps you need a full outer join, and then filter on result rows where the second table entry is None (NULL). For example:
tableA.joinFull(tableB)
  .on((a: A, b: B) => /* your join condition here */)
  .filter { case (_, maybeMissing) => maybeMissing.isEmpty }
  .map { case (first, _) => first }
I've found a solution by splitting it into 2 queries:
one query is:
tableA.join(tableB)
  .on((a: A, b: B) => a.key === b.key)
  .filter { case (a, b) => a.field1 =!= b.field1 }
  .map(_._1)
second query is:
tableA.filterNot(_.key in tableB.map(_.key))
And then "union" the two queries
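For reference, the final step could look like the following hedged sketch, assuming the two queries above are bound to the names firstQuery and secondQuery (Slick's union requires both sides to have the same projection, here full tableA rows):
// union removes duplicates; ++ (unionAll) keeps them
val combined = firstQuery union secondQuery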

Calculating a variable inside RDD after full outer join in Scala

What I want to do is simple, but I struggle with Scala and RDDs.
The concept is this:
rdd1              rdd2
id   count        id   count
a    2            a    1
b    1            c    5
                  d    3
And the result I am searching for is this:
rdd2
id   count
a    3
b    1
c    5
d    3
What I intend to do is perform a full outer join to get the common and non-common records, identified by the id field. For now, rdd2 is empty.
rdd1 and rdd2 are:
RDD[(String, org.apache.spark.sql.Row)]
For now, I have the following code:
var rdd3 = rdd1.fullOuterJoin(rdd2).map {
  case (id, left, right) =>
    // TODO
}
How can I calculate that sum between RDDs?
If you are doing a fullOuterJoin you get the key and two Options passed into the closure (one Option represents the left side, the other one the right side). So the closure could look like this:
val result = rdd1.fullOuterJoin(rdd2).map {
  case (id, (left, right)) =>
    (id, left.getOrElse(0) + right.getOrElse(0))
}
This applies if your RDDs are of type RDD[(String, Int)].
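Since the RDDs in the question are actually RDD[(String, Row)], one hedged option is to pull the count out of the Row before joining and then sum exactly as above (this assumes the count sits in a field named "count"; adjust to your schema):
val counts1 = rdd1.mapValues(_.getAs[Int]("count"))   // RDD[(String, Int)]
val counts2 = rdd2.mapValues(_.getAs[Int]("count"))   // RDD[(String, Int)]

val rdd3 = counts1.fullOuterJoin(counts2).map {
  case (id, (left, right)) => (id, left.getOrElse(0) + right.getOrElse(0))
}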

Why does the external sorter run in Spark?

I wrote data-processing code in Scala and Spark, and somehow it's very slow. I guess it's because of 'ExternalSort'. As you can see in my code below, there is no reason to sort the data, but Spark did.
I have more than 6,000,000 rows in an RDD and I try to group the data by the column 'ID' (of which there are fewer than 20 distinct values, so each ID group would have more than 300,000 rows).
I know it's pretty large data, but the other processing steps were not slow. Any idea what's going on here?
val ListByID = allData.map { x => (x.getAs[String]("ID"), List(x)) }
  .reduceByKey { (a: List[Row], b: List[Row]) => List(a, b).flatten }

val goalData = ListByID.map { rowList =>
  val list = rowList._2
  val ID = rowList._1
  val SD = list.head.getAs[String]("SD")
  val ANOTEHR_ID_CNT = list.map { row => row.getAs[String]("ANOTHER_ID") }.distinct.length
  Row(
    ID, ID, list.length,
    list.count { row => row.getAs[Int]("FLAGA") == 1 },
    list.count { row => row.getAs[Int]("FLAGB") == 1 },
    SD, ANOTEHR_ID_CNT)
}
The following part:
allData.map{...}.reduceByKey { (a: List[Row], b: List[Row]) => List(a, b).flatten }
is just a significantly more expensive implementation of groupByKey. Not only does it put more pressure on the GC through map-side aggregation, it may also create a huge number of temporary objects. If a single group doesn't fit into memory, an out-of-memory error is inevitable.
Next, you group the data and drag along all the fields when all you do later is counting. This could easily be handled with a simple two-step aggregation:
1. Reduce by (ID, ANOTHER_ID), counting FLAGA = 1 and FLAGB = 1 and keeping a single SD.
2. Reduce the result of step 1 by ID: sum the FLAGA = 1 and FLAGB = 1 counts, sum 1 per row (which gives the distinct ANOTHER_ID count), and keep an arbitrary SD.
A rough sketch of these two steps is shown below.
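This is only a sketch of the two reduce steps described above, working directly on the RDD[Row] from the question; the tuple layout of the accumulator is my own choice and the field names are taken from the question:
// Step 1: reduce by (ID, ANOTHER_ID), counting flag hits and keeping one SD.
val byIdAndAnotherId = allData
  .map { row =>
    ((row.getAs[String]("ID"), row.getAs[String]("ANOTHER_ID")),
      (1L,                                              // rows per (ID, ANOTHER_ID)
       if (row.getAs[Int]("FLAGA") == 1) 1L else 0L,    // FLAGA = 1 count
       if (row.getAs[Int]("FLAGB") == 1) 1L else 0L,    // FLAGB = 1 count
       row.getAs[String]("SD")))                        // keep a single SD
  }
  .reduceByKey { case ((c1, a1, b1, sd), (c2, a2, b2, _)) =>
    (c1 + c2, a1 + a2, b1 + b2, sd)
  }

// Step 2: reduce by ID, summing the counts and adding 1 per distinct ANOTHER_ID.
val byId = byIdAndAnotherId
  .map { case ((id, _), (cnt, fa, fb, sd)) => (id, (cnt, fa, fb, 1L, sd)) }
  .reduceByKey { case ((c1, a1, b1, d1, sd), (c2, a2, b2, d2, _)) =>
    (c1 + c2, a1 + a2, b1 + b2, d1 + d2, sd)
  }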
Finally, if you start with a DataFrame, why move the data to a less efficient format at all? In pseudocode:
df.groupBy("ID").agg(
  count($"*"),
  count(when($"FLAGA" === 1, 1)),
  count(when($"FLAGB" === 1, 1)),
  countDistinct("ANOTHER_ID"),
  first("SD")
)
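A slightly fleshed-out version of that pseudocode with imports and column aliases (the alias names are my own; this assumes spark.implicits._ is in scope for the $ syntax):
import org.apache.spark.sql.functions._

val goalDf = df.groupBy($"ID").agg(
  count(lit(1)).as("TOTAL_CNT"),                      // rows per ID
  count(when($"FLAGA" === 1, 1)).as("FLAGA_CNT"),     // rows with FLAGA = 1
  count(when($"FLAGB" === 1, 1)).as("FLAGB_CNT"),     // rows with FLAGB = 1
  countDistinct($"ANOTHER_ID").as("ANOTHER_ID_CNT"),  // distinct ANOTHER_ID values
  first($"SD").as("SD")                               // arbitrary SD per group
)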

Join two RDDs in Spark

I have two RDDs. One RDD has just one column and the other has two columns. To join the two RDDs on keys I have added a dummy value of 0. Is there any other, more efficient way of doing this using join?
val lines = sc.textFile("ml-100k/u.data")
val movienamesfile = sc.textFile("Cml-100k/u.item")
val moviesid = lines.map(x => x.split("\t")).map(x => (x(1),0))
val test = moviesid.map(x => x._1)
val movienames = movienamesfile.map(x => x.split("\\|")).map(x => (x(0),x(1)))
val shit = movienames.join(moviesid).distinct()
Edit:
Let me restate this question in SQL. Say, for example, I have table1 (movieid) and table2 (movieid, moviename). In SQL we would write something like:
select moviename, movieid, count(1)
from table2 inner join table1 on table1.movieid = table2.movieid
group by ....
Here in SQL, table1 has only one column whereas table2 has two columns, and the join still works. Can Spark join on keys from both RDDs in the same way?
The join operation is defined only on PairwiseRDDs, which are quite different from a relation / table in SQL. Each element of a PairwiseRDD is a Tuple2 where the first element is the key and the second is the value. Both can contain complex objects as long as the key provides a meaningful hashCode.
If you want to think about this in a SQL-ish way, you can consider the key as everything that goes into the ON clause and the value as the selected columns.
SELECT table1.value, table2.value
FROM table1 JOIN table2 ON table1.key = table2.key
While these approaches look similar at first glance, and you can express one using the other, there is one fundamental difference. When you look at a SQL table and ignore constraints, all columns belong to the same class of objects, while key and value in a PairwiseRDD each have a clear meaning.
Going back to your problem: to use join you need both a key and a value. Arguably much cleaner than using 0 as a placeholder would be to use the null singleton, but there is really no way around having some value.
For small data you can use filter in a similar way to broadcast join:
val moviesidBD = sc.broadcast(
  lines.map(x => x.split("\t")).map(_.head).collect.toSet)

movienames.filter { case (id, _) => moviesidBD.value contains id }
but if you really want SQL-ish joins then you should simply use SparkSQL.
val movieIdsDf = lines
  .map(x => x.split("\t"))
  .map(a => Tuple1(a.head))
  .toDF("id")

val movienamesDf = movienames.toDF("id", "name")

// Add optional join type qualifier
movienamesDf.join(movieIdsDf, movieIdsDf("id") <=> movienamesDf("id"))
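If you also want the count(1) per movie from the SQL in your edit, a hedged follow-up on top of that join could look like this (the grouping columns are chosen to mirror the SQL):
movienamesDf.join(movieIdsDf, movieIdsDf("id") <=> movienamesDf("id"))
  .groupBy(movienamesDf("id"), movienamesDf("name"))
  .count()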
On RDDs, the join operation is only defined for PairwiseRDDs, so you need to turn the values into a pair RDD first. Below is a sample:
val rdd1 = sc.textFile("/data-001/part/")
val rdd_1 = rdd1.map(x => x.split('|')).map(x => (x(0), x(1)))

val rdd2 = sc.textFile("/data-001/partsupp/")
val rdd_2 = rdd2.map(x => x.split('|')).map(x => (x(0), x(1)))

rdd_1.join(rdd_2).take(2).foreach(println)

How to subtract values when keys are the same in pairRDDs?

I have two pair RDDs of (Int, breeze DenseMatrix[Double]) and what I want is, when the keys are the same, to subtract their values.
E.g. when I have
RDD_1: (1, BreezeMatrix_a)
RDD_2: (1, BreezeMatrix_b)
the wanted result is: (1, BreezeMatrix_a - BreezeMatrix_b)
I tried join, but what is returned is (Int, (BreezeMatrix_a, BreezeMatrix_b)) and I don't know how the second part can be transformed. I can't tell whether it is a set or an array; Spark is not clear about that.
Any other ideas?
Let the result of the join be
joinresult = (Int, (BreezeMatrix_a, BreezeMatrix_b))
then:
val actualresult = joinresult.map(a => (a._1, a._2._1 - a._2._2))
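For completeness, a hedged, self-contained sketch of the same idea, assuming your two pair RDDs are called rdd1 and rdd2 and are both RDD[(Int, DenseMatrix[Double])] (note that join keeps only the keys present in both RDDs):
import breeze.linalg.DenseMatrix

val joined = rdd1.join(rdd2)                                   // RDD[(Int, (DenseMatrix[Double], DenseMatrix[Double]))]
val diffs  = joined.map { case (key, (a, b)) => (key, a - b) } // element-wise subtraction per key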