Generic join between two dataframes in Spark/Scala

I have two dataframes, left and right. I have a working solution for my problem, but I need a way to make it generic. My question is at the end.
leftDF:
+------+---------+-------+-------+
|leftId|leftAltId|leftCur|leftAmt|
+------+---------+-------+-------+
|1     |100      |USD    |20     |
|2     |200      |INR    |100    |
|4     |500      |MXN    |100    |
+------+---------+-------+-------+
rightDF:
+-------+----------+--------+--------+
|rightId|rightAltId|rightCur|rightAmt|
+-------+----------+--------+--------+
|1      |300       |USD     |20      |
|3      |400       |MXN     |100     |
|4      |600       |MXN     |200     |
+-------+----------+--------+--------+
I want to perform a join between these two dataframes and expect four dataframes as output:
1. transactions that exist in leftDF and not in rightDF
2. transactions that exist in rightDF and not in leftDF
3. transactions that have at least one of the ids in common between the two dataframes
3.a Strict match: same currency and amount in both dataframes. Example: transaction with id 1.
3.b Relaxed match: transactions that have a matching id but a different currency/amount combination. Example: transaction with id 4.
Here's the desired output:
1. transactions that exist in leftDF and not in rightDF
+------+---------+-------+-------+-------+----------+--------+--------+
|leftId|leftAltId|leftCur|leftAmt|rightId|rightAltId|rightCur|rightAmt|
+------+---------+-------+-------+-------+----------+--------+--------+
|2     |200      |INR    |100    |null   |null      |null    |null    |
+------+---------+-------+-------+-------+----------+--------+--------+
2. transactions that exist in rightDF and not in leftDF
+------+---------+-------+-------+-------+----------+--------+--------+
|leftId|leftAltId|leftCur|leftAmt|rightId|rightAltId|rightCur|rightAmt|
+------+---------+-------+-------+-------+----------+--------+--------+
|null  |null     |null   |null   |3      |400       |MXN     |100     |
+------+---------+-------+-------+-------+----------+--------+--------+
3. transactions that have at least one of the ids in common between the two dataframes
+------+---------+-------+-------+-------+----------+--------+--------+
|leftId|leftAltId|leftCur|leftAmt|rightId|rightAltId|rightCur|rightAmt|
+------+---------+-------+-------+-------+----------+--------+--------+
|1     |100      |USD    |20     |1      |300       |USD     |20      |
|4     |500      |MXN    |100    |4      |600       |MXN     |200     |
+------+---------+-------+-------+-------+----------+--------+--------+
3.a Strict match: same currency and amount in both dataframes. Example: transaction with id 1.
+------+---------+-------+-------+-------+----------+--------+--------+
|leftId|leftAltId|leftCur|leftAmt|rightId|rightAltId|rightCur|rightAmt|
+------+---------+-------+-------+-------+----------+--------+--------+
|1     |100      |USD    |20     |1      |300       |USD     |20      |
+------+---------+-------+-------+-------+----------+--------+--------+
3.b Relaxed match: transactions that have a matching id but a different currency/amount combination. Example: transaction with id 4.
+------+---------+-------+-------+-------+----------+--------+--------+
|leftId|leftAltId|leftCur|leftAmt|rightId|rightAltId|rightCur|rightAmt|
+------+---------+-------+-------+-------+----------+--------+--------+
|4     |500      |MXN    |100    |4      |600       |MXN     |200     |
+------+---------+-------+-------+-------+----------+--------+--------+
Here's the working code I have for it:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col
import sparkSession.implicits._

val leftDF: DataFrame = Seq((1, 100, "USD", 20), (2, 200, "INR", 100), (4, 500, "MXN", 100))
  .toDF("leftId", "leftAltId", "leftCur", "leftAmt")
val rightDF: DataFrame = Seq((1, 300, "USD", 20), (3, 400, "MXN", 100), (4, 600, "MXN", 200))
  .toDF("rightId", "rightAltId", "rightCur", "rightAmt")
leftDF.show(false)
rightDF.show(false)

// join on either of the id columns matching
val idMatchQuery = leftDF("leftId") === rightDF("rightId") || leftDF("leftAltId") === rightDF("rightAltId")
// a strict match additionally requires the same currency and amount
val currencyMatchQuery = leftDF("leftCur") === rightDF("rightCur") && leftDF("leftAmt") === rightDF("rightAmt")
// null checks on the full outer join decide which side(s) a row came from
val leftOnlyQuery = (col("leftId").isNotNull && col("rightId").isNull) || (col("leftAltId").isNotNull && col("rightAltId").isNull)
val rightOnlyQuery = (col("rightId").isNotNull && col("leftId").isNull) || (col("rightAltId").isNotNull && col("leftAltId").isNull)
val matchQuery = (col("rightId").isNotNull && col("leftId").isNotNull) || (col("rightAltId").isNotNull && col("leftAltId").isNotNull)

val result = leftDF.join(rightDF, idMatchQuery, "fullouter")
val leftOnlyDF = result.filter(leftOnlyQuery)
val rightOnlyDF = result.filter(rightOnlyQuery)
val matchDF = result.filter(matchQuery)
val strictMatchDF = matchDF.filter(currencyMatchQuery.equalTo(true))
val relaxedMatchDF = matchDF.filter(currencyMatchQuery.equalTo(false))
leftOnlyDF.show(false)
rightOnlyDF.show(false)
matchDF.show(false)
strictMatchDF.show(false)
relaxedMatchDF.show(false)
What I'm looking for:
I want to be able to take the column names to join on as lists and make the code generic, e.g.
val relaxedJoinList = Array(("leftId", "rightId"), ("leftAltId", "rightAltId"))
val strictJoinList = Array(("leftCur", "rightCur"), ("leftAmt", "rightAmt"))

I want to be able to take the column names to join on as lists and make the code generic.
This is not a perfect suggestion, but it should definitely help you generalize. The suggestion is to go with foldLeft:
val relaxedJoinList = Array(("leftId", "rightId"), ("leftAltId", "rightAltId"))
val rHead = relaxedJoinList.head
val strictJoinList = Array(("leftCur", "rightCur"), ("leftAmt", "rightAmt"))
val sHead = strictJoinList.head
// OR together the id equality conditions, AND together the strict-match conditions
val idMatchQuery = relaxedJoinList.tail.foldLeft(leftDF(rHead._1) === rightDF(rHead._2)) { (acc, p) => acc || leftDF(p._1) === rightDF(p._2) }
val currencyMatchQuery = strictJoinList.tail.foldLeft(leftDF(sHead._1) === rightDF(sHead._2)) { (acc, p) => acc && leftDF(p._1) === rightDF(p._2) }
// the same folding builds the null-check conditions over the joined result
val leftOnlyQuery = relaxedJoinList.tail.foldLeft(col(rHead._1).isNotNull && col(rHead._2).isNull) { (acc, p) => acc || (col(p._1).isNotNull && col(p._2).isNull) }
val rightOnlyQuery = relaxedJoinList.tail.foldLeft(col(rHead._1).isNull && col(rHead._2).isNotNull) { (acc, p) => acc || (col(p._1).isNull && col(p._2).isNotNull) }
val matchQuery = relaxedJoinList.tail.foldLeft(col(rHead._1).isNotNull && col(rHead._2).isNotNull) { (acc, p) => acc || (col(p._1).isNotNull && col(p._2).isNotNull) }
The rest of the code stays the same as yours.
I hope this helps.
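For reference, the whole thing can also be wrapped into one reusable helper. This is only a sketch along the same lines as the foldLeft suggestion; the name splitByMatch, the use of reduce instead of head/tail + foldLeft, and the tuple return type are my own choices, not from the question:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

def splitByMatch(leftDF: DataFrame, rightDF: DataFrame,
                 relaxed: Seq[(String, String)],
                 strict: Seq[(String, String)]): (DataFrame, DataFrame, DataFrame, DataFrame) = {
  // OR of the id equalities, AND of the strict (currency/amount) equalities
  val idMatch    = relaxed.map { case (l, r) => leftDF(l) === rightDF(r) }.reduce(_ || _)
  val strictCond = strict.map { case (l, r) => leftDF(l) === rightDF(r) }.reduce(_ && _)
  // null checks on the full outer join decide which side(s) a row came from
  val leftOnly  = relaxed.map { case (l, r) => col(l).isNotNull && col(r).isNull }.reduce(_ || _)
  val rightOnly = relaxed.map { case (l, r) => col(l).isNull && col(r).isNotNull }.reduce(_ || _)
  val matched   = relaxed.map { case (l, r) => col(l).isNotNull && col(r).isNotNull }.reduce(_ || _)

  val joined = leftDF.join(rightDF, idMatch, "fullouter")
  (joined.filter(leftOnly),
   joined.filter(rightOnly),
   joined.filter(matched && strictCond.equalTo(true)),
   joined.filter(matched && strictCond.equalTo(false)))
}

val (leftOnlyDF, rightOnlyDF, strictMatchDF, relaxedMatchDF) =
  splitByMatch(leftDF, rightDF, relaxedJoinList, strictJoinList)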

Related

Scala -- apply a custom if-then on a dataframe

I have this kind of dataset:
val cols = Seq("col_1", "col_2")
val data = List(("a", 1),
  ("b", 1),
  ("a", 2),
  ("c", 3),
  ("a", 3))
val df = spark.createDataFrame(data).toDF(cols: _*)
+-----+-----+
|col_1|col_2|
+-----+-----+
|a    |1    |
|b    |1    |
|a    |2    |
|c    |3    |
|a    |3    |
+-----+-----+
I want to add an if-then column based on the existing columns.
import org.apache.spark.sql.functions.{when, col, lit}

df.withColumn("col_new",
  when(col("col_2").isin(2, 5), "str_1")
    .when(col("col_2").isin(4, 6), "str_2")
    .when(col("col_2").isin(1) && col("col_1").contains("a"), "str_3")
    .when(col("col_2").isin(3) && col("col_1").contains("b"), "str_1")
    .when(col("col_2").isin(1, 2, 3), "str_4")
    .otherwise(lit("other")))
Instead of the list of when-then statements, I would prefer to apply a custom function. In Python I would use a lambda and map.
Thank you!
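For reference, one way to get close to this in Scala is to factor the chain into a function from Columns to a Column. This is only a sketch, and categorize is a hypothetical name, not something from the question:

import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{when, col, lit}

// all the branching lives in one reusable function instead of an inline chain
def categorize(c1: Column, c2: Column): Column =
  when(c2.isin(2, 5), "str_1")
    .when(c2.isin(4, 6), "str_2")
    .when(c2.isin(1) && c1.contains("a"), "str_3")
    .when(c2.isin(3) && c1.contains("b"), "str_1")
    .when(c2.isin(1, 2, 3), "str_4")
    .otherwise(lit("other"))

df.withColumn("col_new", categorize(col("col_1"), col("col_2")))

Unlike a Python lambda with map, this stays a Column expression, so Spark can still optimize it; a UDF would also work but opts out of Catalyst optimizations.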

Align multiple dataframes in pyspark

I have these 4 spark dataframes:
order,device,count_1
101,201,2
102,202,4
order,device,count_2
101,201,10
103,203,100
order,device,count_3
104,204,111
103,203,10
order,device,count_4
101,201,4
104,204,11
I want to create a resultant dataframe as:
order,device,count_1,count_2,count_3,count_4
101,201,2,10,,4
102,202,4,,,
103,203,,100,10,
104,204,,,111,11
Is this a case of UNION or JOIN or APPEND? How to get the final resultant df?
You can think of UNION as combining tables by rows, so the number of rows will likely increase. JOIN combines tables by columns. I'm not sure what you mean by APPEND, but in this case, you would want JOIN.
Try:
import spark.implicits._ // assumes a SparkSession named spark

val df1 = Seq((101, 201, 2), (102, 202, 4)).toDF("order", "device", "count_1")
val df2 = Seq((101, 201, 10), (103, 203, 100)).toDF("order", "device", "count_2")
val df3 = Seq((104, 204, 111), (103, 203, 10)).toDF("order", "device", "count_3")
val df4 = Seq((101, 201, 4), (104, 204, 11)).toDF("order", "device", "count_4")

val df12 = df1.join(df2, Seq("order", "device"), "fullouter")
df12.show(false)
val df123 = df12.join(df3, Seq("order", "device"), "fullouter")
df123.show(false)
val df1234 = df123.join(df4, Seq("order", "device"), "fullouter")
df1234.show(false)
returns:
+-----+------+-------+-------+-------+-------+
|order|device|count_1|count_2|count_3|count_4|
+-----+------+-------+-------+-------+-------+
|101  |201   |2      |10     |null   |4      |
|102  |202   |4      |null   |null   |null   |
|103  |203   |null   |100    |10     |null   |
|104  |204   |null   |null   |111    |11     |
+-----+------+-------+-------+-------+-------+
As you can see, the comments are flawed and the first answer is incorrect.
I did this in Scala; it should be easy to port to pyspark.
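The chain of fullouter joins above can also be generalized with foldLeft over a list of dataframes; a sketch, assuming all frames share the order and device key columns:

// start from df1 and fold every remaining frame in with the same join
val frames = Seq(df2, df3, df4)
val result = frames.foldLeft(df1) { (acc, df) =>
  acc.join(df, Seq("order", "device"), "fullouter")
}
result.show(false)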

Spark scala join RDD between 2 datasets

Suppose I have two datasets as follows:
Dataset 1:
id, name, score
1, Bill, 200
2, Bew, 23
3, Amy, 44
4, Ramond, 68
Dataset 2:
id,message
1, i love Bill
2, i hate Bill
3, Bew go go !
4, Amy is the best
5, Ramond is the wrost
6, Bill go go
7, Bill i love ya
8, Ramond is Bad
9, Amy is great
I want to join the two datasets above and count how many times each person's name from dataset1 appears in dataset2. The result should be:
Bill, 4
Ramond, 2
..
..
I managed to join the two, but I'm not sure how to count how many times each name appears.
Any suggestion would be appreciated.
Edit: here is my join code:
val rdd = sc.textFile("dataset1")
val rdd2 = sc.textFile("dataset2")
val rddPair1 = rdd.map { x =>
  val data = x.split(",")
  (data(0), data(1))
}
val rddPair2 = rdd2.map { x =>
  val data = x.split(",")
  (data(0), data(1))
}
rddPair1.join(rddPair2).collect().foreach { f =>
  println(f._1 + " " + f._2._1 + " " + f._2._2)
}
Achieving the solution you want using RDDs would be complex; not so much using dataframes.
The first step is to read the two files you have into dataframes, as below:
val df1 = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", true)
  .load("dataset1")
val df2 = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", true)
  .load("dataset2")
You should then have:
df1
+---+------+-----+
|id |name  |score|
+---+------+-----+
|1  |Bill  |200  |
|2  |Bew   |23   |
|3  |Amy   |44   |
|4  |Ramond|68   |
+---+------+-----+
df2
+---+-------------------+
|id |message            |
+---+-------------------+
|1  |i love Bill        |
|2  |i hate Bill        |
|3  |Bew go go !        |
|4  |Amy is the best    |
|5  |Ramond is the wrost|
|6  |Bill go go         |
|7  |Bill i love ya     |
|8  |Ramond is Bad      |
|9  |Amy is great       |
+---+-------------------+
A join, groupBy, and count should give your desired output:
df1.join(df2, df2("message").contains(df1("name")), "left")
  .groupBy("name")
  .count()
  .show(false)
Final output would be
+------+-----+
|name  |count|
+------+-----+
|Ramond|2    |
|Bill  |4    |
|Amy   |2    |
|Bew   |1    |
+------+-----+
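One caveat on the contains-based condition: it matches substrings, so a name like Bill would also be counted inside a message mentioning, say, Billy. If exact word matches are required, one option is to explode the messages into tokens first; a sketch (the token column name and the split-on-space rule are my own simplifications):

import org.apache.spark.sql.functions.{split, explode, count}

// one row per word in every message
val words = df2.select(explode(split(df2("message"), " ")).as("token"))
// count(token) ignores nulls, so names with no matches get 0 instead of 1
val counts = df1.join(words, words("token") === df1("name"), "left")
  .groupBy("name")
  .agg(count(words("token")).as("count"))
counts.show(false)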

append two dataframes and update data

Hello guys, I want to update an old dataframe based on the pos_id and article_id fields.
If the tuple (pos_id, article_id) exists, I add each numeric column to the old one; if it doesn't exist, I add the new row. That works fine. But I don't know how to handle the case where the old dataframe is initially empty; in that case I just want to add the rows of the second dataframe to the old one. Here is what I did:
import spark.implicits._
import org.apache.spark.sql.types.{LongType, DateType, DoubleType}

val histocaisse = spark.read
  .format("csv")
  .option("header", "true") // reading the headers
  .load("C:/Users/MHT/Desktop/histocaisse_dte1.csv")

val hist1 = histocaisse
  .withColumn("pos_id", 'pos_id.cast(LongType))
  .withColumn("article_id", 'article_id.cast(LongType))
  .withColumn("date", 'date.cast(DateType))
  .withColumn("qte", 'qte.cast(DoubleType))
  .withColumn("ca", 'ca.cast(DoubleType))

val histocaisse2 = spark.read
  .format("csv")
  .option("header", "true") // reading the headers
  .load("C:/Users/MHT/Desktop/histocaisse_dte2.csv")

val hist2 = histocaisse2
  .withColumn("pos_id", 'pos_id.cast(LongType))
  .withColumn("article_id", 'article_id.cast(LongType))
  .withColumn("date", 'date.cast(DateType))
  .withColumn("qte", 'qte.cast(DoubleType))
  .withColumn("ca", 'ca.cast(DoubleType))
hist1.show(false)
hist2.show(false)
+------+----------+----------+----+----+
|pos_id|article_id|date      |qte |ca  |
+------+----------+----------+----+----+
|1     |1         |2000-01-07|2.5 |3.5 |
|2     |2         |2000-01-07|14.7|12.0|
|3     |3         |2000-01-07|3.5 |1.2 |
+------+----------+----------+----+----+
+------+----------+----------+----+----+
|pos_id|article_id|date      |qte |ca  |
+------+----------+----------+----+----+
|1     |1         |2000-01-08|2.5 |3.5 |
|2     |2         |2000-01-08|14.7|12.0|
|3     |3         |2000-01-08|3.5 |1.2 |
|4     |4         |2000-01-08|3.5 |1.2 |
|5     |5         |2000-01-08|14.5|1.2 |
|6     |6         |2000-01-08|2.0 |1.25|
+------+----------+----------+----+----+
The expected result is:
+------+----------+----------+----+----+
|pos_id|article_id|date      |qte |ca  |
+------+----------+----------+----+----+
|1     |1         |2000-01-08|5.0 |7.0 |
|2     |2         |2000-01-08|39.4|24.0|
|3     |3         |2000-01-08|7.0 |2.4 |
|4     |4         |2000-01-08|3.5 |1.2 |
|5     |5         |2000-01-08|14.5|1.2 |
|6     |6         |2000-01-08|2.0 |1.25|
+------+----------+----------+----+----+
Here is the solution I found:
import org.apache.spark.sql.functions.{coalesce, lit}

val df = hist2.join(hist1, Seq("article_id", "pos_id"), "left")
  .select($"pos_id", $"article_id",
    coalesce(hist2("date"), hist1("date")).alias("date"),
    (coalesce(hist2("qte"), lit(0)) + coalesce(hist1("qte"), lit(0))).alias("qte"),
    (coalesce(hist2("ca"), lit(0)) + coalesce(hist1("ca"), lit(0))).alias("ca"))
  .orderBy("pos_id", "article_id")
This doesn't work when hist1 is empty. Any help please?
Thanks a lot.
I'm not sure if I understood correctly, but if the problem is that one of the dataframes is sometimes empty and that makes the join crash, something you can try is this:
import scala.util.{Try, Success, Failure}

// Try(hist1.first) fails with NoSuchElementException when hist1 is empty
val checkHist1Empty = Try(hist1.first)
val df = checkHist1Empty match {
  case Success(_) =>
    hist2.join(hist1, Seq("article_id", "pos_id"), "left")
      .select($"pos_id", $"article_id",
        coalesce(hist2("date"), hist1("date")).alias("date"),
        (coalesce(hist2("qte"), lit(0)) + coalesce(hist1("qte"), lit(0))).alias("qte"),
        (coalesce(hist2("ca"), lit(0)) + coalesce(hist1("ca"), lit(0))).alias("ca"))
      .orderBy("pos_id", "article_id")
  case Failure(_) =>
    hist2.select($"pos_id", $"article_id",
      coalesce(hist2("date")).alias("date"),
      coalesce(hist2("qte"), lit(0)).alias("qte"),
      coalesce(hist2("ca"), lit(0)).alias("ca"))
      .orderBy("pos_id", "article_id")
}
This basically checks whether hist1 is empty before performing the join. If it is empty, it builds the result from hist2 alone using the same logic; if it contains data, it applies the logic you already had, which you said works.
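An alternative to Try, as a sketch: head(1) returns an empty array for an empty dataframe without scanning everything, so the same split can be written as a plain if:

val df = if (hist1.head(1).isEmpty) {
  // hist1 has no rows: the result is just hist2
  hist2.orderBy("pos_id", "article_id")
} else {
  hist2.join(hist1, Seq("article_id", "pos_id"), "left")
    .select($"pos_id", $"article_id",
      coalesce(hist2("date"), hist1("date")).alias("date"),
      (coalesce(hist2("qte"), lit(0)) + coalesce(hist1("qte"), lit(0))).alias("qte"),
      (coalesce(hist2("ca"), lit(0)) + coalesce(hist1("ca"), lit(0))).alias("ca"))
    .orderBy("pos_id", "article_id")
}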
Instead of doing a join, why don't you union the two dataframes, then groupBy (pos_id, article_id) and aggregate each column: sum for qte and ca?
import org.apache.spark.sql.functions.{max, sum}

val df3 = hist1.union(hist2) // unionAll is deprecated since Spark 2.0
val df4 = df3.groupBy("pos_id", "article_id")
  .agg(max("date").as("date"), sum("qte").as("qte"), sum("ca").as("ca"))

Spark - named_struct for empty Map

I use Spark 2.0.1 and Scala 2.11, and this question is related to this one.
Below is the setup:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

val ss = new StructType().add("x", IntegerType).add("y", MapType(DoubleType, IntegerType))
val s = new StructType()
  .add("a", IntegerType)
  .add("b", ss)
val d = Seq(Row(1, Row(1, Map(1.0 -> 1, 2.0 -> 2))),
  Row(2, Row(2, Map(2.0 -> 2, 3.0 -> 3))),
  Row(3, null),
  Row(4, Row(4, Map())))
val rd = sc.parallelize(d)
val df = spark.createDataFrame(rd, s)
df.select($"a", $"b").show(false)
// +---+---------------------------+
// |a  |b                          |
// +---+---------------------------+
// |1  |[1,Map(1.0 -> 1, 2.0 -> 2)]|
// |2  |[2,Map(2.0 -> 2, 3.0 -> 3)]|
// |3  |null                       |
// |4  |[4,Map()]                  |
// +---+---------------------------+
The statement below works when I provide a default to coalesce (the cell at row 2, column 3 holds the default value):
import org.apache.spark.sql.functions.expr

df.groupBy($"a").pivot("a")
  .agg(expr("first(coalesce(b, named_struct('x', cast(null as Int), 'y', Map(0.0D, 0))))"))
  .show(false)
// +---+---------------------------+---------------------------+--------------------+---------+
// |a  |1                          |2                          |3                   |4        |
// +---+---------------------------+---------------------------+--------------------+---------+
// |1  |[1,Map(1.0 -> 1, 2.0 -> 2)]|null                       |null                |null     |
// |3  |null                       |null                       |[null,Map(0.0 -> 0)]|null     |
// |4  |null                       |null                       |null                |[4,Map()]|
// |2  |null                       |[2,Map(2.0 -> 2, 3.0 -> 3)]|null                |null     |
// +---+---------------------------+---------------------------+--------------------+---------+
But how do I create an empty Map() (like what's seen for a=4) using named_struct or otherwise?
You can achieve this with a case class and a UDF:
case class MyStruct(x: Option[Int], y: Map[Double, Int])

import org.apache.spark.sql.functions.{udf, first, coalesce}

val emptyStruct = udf(() => MyStruct(None, Map.empty[Double, Int]))

df.groupBy($"a").pivot("a")
  .agg(first(coalesce($"b", emptyStruct())))
  .show(false)
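As an aside: on Spark 2.2+ (the question targets 2.0.1, so this doesn't apply there), typedLit can build the default struct without a UDF. A sketch, assuming the field names x and y line up with the schema of b:

import org.apache.spark.sql.functions.{coalesce, first, lit, struct, typedLit}

// struct() keeps the aliases as field names, so the type matches column b
val emptyDefault = struct(
  lit(null).cast("int").as("x"),
  typedLit(Map.empty[Double, Int]).as("y"))

df.groupBy($"a").pivot("a")
  .agg(first(coalesce($"b", emptyDefault)))
  .show(false)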