Iterating on columns in dataframe - scala

I have the following data frames
df1
+----------+----+----+----+-----+
| WEEK|DIM1|DIM2| T1| T2|
+----------+----+----+----+-----+
|2016-04-02| 14|NULL|9874| 880|
|2016-04-30| 14| FR|9875| 13|
|2017-06-10| 15| PQR|9867|57721|
+----------+----+----+----+-----+
df2
+----------+----+----+----+-----+
| WEEK|DIM1|DIM2| T1| T2|
+----------+----+----+----+-----+
|2016-04-02| 14|NULL|9879| 820|
|2016-04-30| 14| FR|9785| 9|
|2017-06-10| 15| XYZ|9967|57771|
+----------+----+----+----+-----+
I need to produce my output as follows:
+----------+----+----+----+-----+----+-----+-------+-------+----------+------------+
| WEEK|DIM1|DIM2| T1| T2| T1| T2|t1_diff|t2_diff|pr_primary|pr_reference|
+----------+----+----+----+-----+----+-----+-------+-------+----------+------------+
|2016-04-02| 14|NULL|9874| 880|9879| 820| -5| 60| Y| Y|
|2017-06-10| 15| PQR|9867|57721|null| null| null| null| Y| N|
|2017-06-10| 15| XYZ|null| null|9967|57771| null| null| N| Y|
|2016-04-30| 14| FR|9875| 13|9785| 9| 90| 4| Y| Y|
+----------+----+----+----+-----+----+-----+-------+-------+----------+------------+
Here, t1_diff is the difference between the left T1 and the right T1, t2_diff is the difference between the left T2 and the right T2, pr_primary is Y if the row is present in df1 (and N if not), and pr_reference is the same for df2.
I have generated the above with following piece of code
val df1 = Seq(
  ("2016-04-02", "14", "NULL", 9874, 880),
  ("2016-04-30", "14", "FR", 9875, 13),
  ("2017-06-10", "15", "PQR", 9867, 57721)
).toDF("WEEK", "DIM1", "DIM2", "T1", "T2")
val df2 = Seq(
  ("2016-04-02", "14", "NULL", 9879, 820),
  ("2016-04-30", "14", "FR", 9785, 9),
  ("2017-06-10", "15", "XYZ", 9967, 57771)
).toDF("WEEK", "DIM1", "DIM2", "T1", "T2")
import org.apache.spark.sql.functions._
val joined = df1.as("l").join(df2.as("r"), Seq("WEEK", "DIM1", "DIM2"), "fullouter")
val j1 = joined
  .withColumn("t1_diff", col("l.T1") - col("r.T1"))
  .withColumn("t2_diff", col("l.T2") - col("r.T2"))
val isPresentSubstitution = udf( (x: String, y: String) => if (x == null && y == null) "N" else "Y")
j1.withColumn("pr_primary", isPresentSubstitution(col("l.T1"), col("l.T2")))
  .withColumn("pr_reference", isPresentSubstitution(col("r.T1"), col("r.T2")))
  .show
I want to generalize this for any number of columns, not just T1 and T2. Can someone suggest a better way to do this? I am running this in Spark.

To be able to set any number of columns like t1_diff, with arbitrary expressions calculating their values, we need some refactoring that lets us use withColumn in a more generic manner.
First, we need to collect the target values: the names of the target columns and the expressions that calculate their contents. This can be done with a sequence of Tuples:
val diffColumns = Seq(
  ("t1_diff", col("l.T1") - col("r.T1")),
  ("t2_diff", col("l.T2") - col("r.T2"))
)
// or, to make it more readable, create a dedicated "case class DiffColumn(colName: String, expression: Column)"
Now we can use folding to produce the result DataFrame from joined and the sequence above:
val joinedWithDiffCols =
  diffColumns.foldLeft(joined) { case (df, (colName, expression)) =>
    df.withColumn(colName, expression)
  }
joinedWithDiffCols contains the same data as j1 from the question.
To append new columns, you now only have to modify the diffColumns sequence. You can even put the calculation of pr_primary and pr_reference in this sequence (but then rename the reference to appendedColumns, to be more precise), as sketched below.
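For example, a minimal sketch of such an appendedColumns sequence; it replaces the isPresentSubstitution UDF from the question with built-in when/coalesce expressions, everything else is taken from the question:
import org.apache.spark.sql.functions.{coalesce, col, when}

// Presence flags expressed as plain Column expressions, so they fold in
// exactly like the *_diff columns.
val appendedColumns = Seq(
  ("t1_diff",      col("l.T1") - col("r.T1")),
  ("t2_diff",      col("l.T2") - col("r.T2")),
  ("pr_primary",   when(coalesce(col("l.T1"), col("l.T2")).isNotNull, "Y").otherwise("N")),
  ("pr_reference", when(coalesce(col("r.T1"), col("r.T2")).isNotNull, "Y").otherwise("N"))
)

val result = appendedColumns.foldLeft(joined) { case (df, (name, expr)) =>
  df.withColumn(name, expr)
}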
Update
To facilitate the creation of the tuples for diffColumns, the construction can also be generalized, for example:
// when both column names are the same:
def generateDiff(column: String): (String, Column) = generateDiff(column, column)
// when left and right column names are different:
def generateDiff(leftCol: String, rightCol: String): (String, Column) =
  (s"${leftCol}_diff", col("l." + leftCol) - col("r." + rightCol))
val diffColumns = Seq("T1", "T2").map(generateDiff)
End-of-update

Assuming the columns are named the same in both df1 and df2, you can do something like:
val diffCols = df1.columns
  .filter(_.matches("T\\d+"))
  .map(c => col(s"l.$c") - col(s"r.$c") as (s"${c.toLowerCase}_diff"))
And then use it with joined like:
joined.select((col("*") +: diffCols): _*).show(false)
//+----------+----+----+----+-----+----+-----+-------+-------+
//|WEEK |DIM1|DIM2|T1 |T2 |T1 |T2 |t1_diff|t2_diff|
//+----------+----+----+----+-----+----+-----+-------+-------+
//|2016-04-02|14 |NULL|9874|880 |9879|820 |-5 |60 |
//|2017-06-10|15 |PQR |9867|57721|null|null |null |null |
//|2017-06-10|15 |XYZ |null|null |9967|57771|null |null |
//|2016-04-30|14 |FR |9875|13 |9785|9 |90 |4 |
//+----------+----+----+----+-----+----+-----+-------+-------+

You can do it by adding a sequence number to each dataframe and then joining the two dataframes on that sequence number.
val df3 = df1.withColumn("SeqNum", monotonically_increasing_id())
val df4 = df2.withColumn("SeqNum", monotonically_increasing_id())
df3.as("l").join(df4.as("r"), "SeqNum")
  .withColumn("t1_diff", col("l.T1") - col("r.T1"))
  .withColumn("t2_diff", col("l.T2") - col("r.T2"))
  .drop("SeqNum")
  .show()

Related

Avoiding duplicate columns after nullSafeJoin scala spark

I have a use case wherein I need to join on nullable columns. I am doing it like this:
import org.apache.spark.sql.{Column, DataFrame}
import org.apache.spark.sql.functions.col

def nullSafeJoin(leftDF: DataFrame, rightDF: DataFrame, joinOnColumns: Seq[String]) = {
  val dataset1 = leftDF.alias("dataset1")
  val dataset2 = rightDF.alias("dataset2")
  val firstColumn = joinOnColumns.head
  val colExpression: Column = col(s"dataset1.$firstColumn").eqNullSafe(col(s"dataset2.$firstColumn"))
  val fullExpr = joinOnColumns.tail.foldLeft(colExpression) { (colExpression, p) =>
    colExpression && col(s"dataset1.$p").eqNullSafe(col(s"dataset2.$p"))
  }
  dataset1.join(dataset2, fullExpr)
}
The final joined dataset has duplicate columns. I have tried dropping the columns using the alias, like this:
dataset1.join(dataset2, fullExpr).drop(s"dataset2.$firstColumn")
but it doesn't work.
I understand that instead of dropping we could select columns.
I am trying to keep the code base generic, so I don't want to pass the list of columns to be selected into the function (with drop I would only have to drop the joinOnColumns list that is already passed to the function).
Any pointers on how to solve this would be really helpful.
Thanks!
Edit: (Sample data)
leftDF :
+------------------+-----------+---------+---------+-------+
| A| B| C| D| status|
+------------------+-----------+---------+---------+-------+
| 14567| 37| 1| game|Enabled|
| 14567| BASE| 1| toy| Paused|
| 13478| null| 5| game|Enabled|
| 2001| BASE| 1| null| Paused|
| null| 37| 1| home|Enabled|
+------------------+-----------+---------+---------+-------+
rightDF :
+------------------+-----------+---------+
| A| B| C|
+------------------+-----------+---------+
| 140| 37| 1|
| 569| BASE| 1|
| 13478| null| 5|
| 2001| BASE| 1|
| null| 37| 1|
+------------------+-----------+---------+
Final Join (Required):
+------------------+-----------+---------+---------+-------+
| A| B| C| D| status|
+------------------+-----------+---------+---------+-------+
| 13478| null| 5| game|Enabled|
| 2001| BASE| 1| null| Paused|
| null| 37| 1| home|Enabled|
+------------------+-----------+---------+---------+-------+
Your final DataFrame has duplicate columns from both leftDF & rightDF, and there is no identifier to check whether a given column came from leftDF or rightDF.
So I have renamed the leftDF & rightDF columns: leftDF columns start with left_[column_name] and rightDF columns start with right_[column_name].
Hope the code below helps.
scala> :paste
// Entering paste mode (ctrl-D to finish)
val left = Seq(("14567", "37", "1", "game", "Enabled"), ("14567", "BASE", "1", "toy", "Paused"), ("13478", "null", "5", "game", "Enabled"), ("2001", "BASE", "1", "null", "Paused"), ("null", "37", "1", "home", "Enabled")).toDF("a", "b", "c", "d", "status")
val right = Seq(("140", "37", 1), ("569", "BASE", 1), ("13478", "null", 5), ("2001", "BASE", 1), ("null", "37", 1)).toDF("a", "b", "c")
import org.apache.spark.sql.DataFrame
def nullSafeJoin(leftDF: DataFrame, rightDF: DataFrame, joinOnColumns: Seq[String]): DataFrame = {
  val leftRenamedDF = leftDF
    .columns
    .map(c => (c, s"left_${c}"))
    .foldLeft(leftDF) { (df, c) =>
      df.withColumnRenamed(c._1, c._2)
    }
  val rightRenamedDF = rightDF
    .columns
    .map(c => (c, s"right_${c}"))
    .foldLeft(rightDF) { (df, c) =>
      df.withColumnRenamed(c._1, c._2)
    }
  val fullExpr = joinOnColumns
    .tail
    .foldLeft($"left_${joinOnColumns.head}".eqNullSafe($"right_${joinOnColumns.head}")) { (cee, p) =>
      cee && ($"left_${p}".eqNullSafe($"right_${p}"))
    }
  val finalColumns = joinOnColumns
    .map(c => col(s"left_${c}").as(c)) ++                                    // Take the join columns from the left side
    leftDF.columns.diff(joinOnColumns).map(c => col(s"left_${c}").as(c)) ++  // Take the remaining columns from leftDF
    rightDF.columns.diff(joinOnColumns).map(c => col(s"right_${c}").as(c))   // Take the remaining columns from rightDF
  leftRenamedDF.join(rightRenamedDF, fullExpr).select(finalColumns: _*)
}
// Exiting paste mode, now interpreting.
The final DataFrame result is:
scala> nullSafeJoin(left, right, Seq("a", "b", "c")).show(false)
+-----+----+---+----+-------+
|a |b |c |d |status |
+-----+----+---+----+-------+
|13478|null|5 |game|Enabled|
|2001 |BASE|1 |null|Paused |
|null |37 |1 |home|Enabled|
+-----+----+---+----+-------+
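If the only goal is to get rid of the duplicated join columns, a shorter variant is also possible (a sketch, not a drop-in replacement for the renaming approach above): join with the original DataFrame references and then drop the right-hand join columns by Column, so no list of output columns has to be passed in.
def nullSafeJoinDropRight(leftDF: DataFrame, rightDF: DataFrame, joinOnColumns: Seq[String]): DataFrame = {
  // Null-safe equality on every join column, using the original DataFrame references
  val fullExpr = joinOnColumns
    .map(c => leftDF(c) <=> rightDF(c))
    .reduce(_ && _)
  // drop(Column) removes exactly the right-hand copy, leaving the left-hand columns intact
  joinOnColumns.foldLeft(leftDF.join(rightDF, fullExpr)) { (df, c) =>
    df.drop(rightDF(c))
  }
}
Calling nullSafeJoinDropRight(left, right, Seq("a", "b", "c")).show(false) should then yield the same three rows as above.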

How to map values in a column (multiple columns also) of one dataset to another dataset

I am working on the GraphFrames part, where I need the edges/links in d3.js to use the indexed values of the vertices/nodes as source and destination.
Now I have VertexDF as
+--------------------+-----------+
| id| rowID|
+--------------------+-----------+
| Raashul Tandon| 3|
| Helen Jones| 5|
----------------------------------
EdgesDF
+-------------------+--------------------+
| src| dst|
+-------------------+--------------------+
| Raashul Tandon| Helen Jones |
------------------------------------------
Now I need to transform this EdgesDF as below
+-------------------+--------------------+
| src| dst|
+-------------------+--------------------+
| 3 | 5 |
------------------------------------------
All the column values should have the index of the names taken from VertexDF. I am expecting a solution using higher-order functions.
My approach is to convert VertexDF to a map, then iterate over EdgesDF and replace every occurrence.
What I have tried
I made a map of names to ids:
val Actmap = VertxDF.collect().map(f => {
  val name = f.getString(0)
  val id = f.getLong(1)
  (name, id)
}).toMap
Then I used that map with EdgesDF:
EdgesDF.collect().map(f => {
  val src = f.getString(0)
  val dst = f.getString(1)
  val src_id = Actmap.get(src)
  val dst_id = Actmap.get(dst)
  (src_id, dst_id)
})
Your approach of collect-ing the vertex and edge dataframes would work only if they're small. I would suggest left-joining the edge and vertex dataframes to get what you need:
import org.apache.spark.sql.functions._
import spark.implicits._
val VertxDF = Seq(
("Raashul Tandon", 3),
("Helen Jones", 5),
("John Doe", 6),
("Rachel Smith", 7)
).toDF("id", "rowID")
val EdgesDF = Seq(
("Raashul Tandon", "Helen Jones"),
("Helen Jones", "John Doe"),
("Unknown", "Raashul Tandon"),
("John Doe", "Rachel Smith")
).toDF("src", "dst")
EdgesDF.as("e").
join(VertxDF.as("v1"), $"e.src" === $"v1.id", "left_outer").
join(VertxDF.as("v2"), $"e.dst" === $"v2.id", "left_outer").
select($"v1.rowID".as("src"), $"v2.rowID".as("dst")).
show
// +----+---+
// | src|dst|
// +----+---+
// | 3| 5|
// | 5| 6|
// |null| 3|
// | 6| 7|
// +----+---+

Comparing two dataframes in Spark

I'm comparing two dataframes in Spark using except().
For example: df.except(df2)
This gives me all the records from df that are not present in df2. However, I would also like to list the field details that are not matching.
For example:
df:
------------------
id,name,age,city
101,kp,28,CHN
------------------
df2:
-----------------
id,name,age,city
101,kp,28,HYD
----------------
Expected output:
df3
--------------------------
id,name,age,city,diff
101,kp,28,CHN,City is not matching
--------------------------------
How can I achieve this?
Use intersect to get the values common to both DataFrames, then build your not-matching logic on top of that.
intersect returns a new Dataset containing rows only in both this Dataset and another Dataset:
df.intersect(df2)
(For reference, the RDD equivalent intersection(anotherRDD) returns the elements present in both datasets and also removes duplicates, including duplicates within a single dataset.)
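To get to the expected df3 with the per-field message, one hedged sketch (assuming id identifies a row and using the column names from the example) is to join on id and build the diff text from the mismatching columns:
import org.apache.spark.sql.functions._

val df3 = df.as("a").join(df2.as("b"), Seq("id"))
  .where(col("a.name") =!= col("b.name") || col("a.age") =!= col("b.age") || col("a.city") =!= col("b.city"))
  .select(
    col("id"), col("a.name"), col("a.age"), col("a.city"),
    // concat_ws skips nulls, so only the failing checks end up in the message
    concat_ws(", ",
      when(col("a.name") =!= col("b.name"), lit("Name is not matching")),
      when(col("a.age") =!= col("b.age"), lit("Age is not matching")),
      when(col("a.city") =!= col("b.city"), lit("City is not matching"))
    ).as("diff")
  )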
Another, newer attempt at the above; it cannot be done very elegantly, but it works with a JOIN as opposed to except. Best I can do.
I believe it does what you need and takes into account the fact that rows may exist in one dataset but not the other.
Run under Databricks.
case class Person(personid: Int, personname: String, cityid: Int)
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.functions._
val df1 = Seq(
Person(0, "AgataZ", 0),
Person(1, "Iweta", 0),
Person(2, "Patryk", 2),
Person(9999, "Maria", 2),
Person(5, "John", 2),
Person(6, "Patsy", 2),
Person(7, "Gloria", 222),
Person(3333, "Maksym", 0)).toDF
val df2 = Seq(
Person(0, "Agata", 0),
Person(1, "Iweta", 0),
Person(2, "Patryk", 2),
Person(5, "John", 2),
Person(6, "Patsy", 333),
Person(7, "Gloria", 2),
Person(4444, "Hans", 3)).toDF
val joined = df1.join(df2, df1("personid") === df2("personid"), "outer")
val newNames = Seq("personId1", "personName1", "personCity1", "personId2", "personName2", "personCity2")
val df_Renamed = joined.toDF(newNames: _*)
// Some deliberate variation shown in approach for learning
val df_temp = df_Renamed
  .filter($"personCity1" =!= $"personCity2" || $"personName1" =!= $"personName2" ||
          $"personName1".isNull || $"personName2".isNull || $"personCity1".isNull || $"personCity2".isNull)
  .select($"personId1", $"personName1".alias("Name"), $"personCity1",
          $"personId2", $"personName2".alias("Name2"), $"personCity2")
  .withColumn("PersonID", when($"personId1".isNotNull, $"personId1").otherwise($"personId2"))
val df_final = df_temp
  .withColumn("nameChange ?", when($"Name".isNull or $"Name2".isNull or $"Name" =!= $"Name2", "Yes").otherwise("No"))
  .withColumn("cityChange ?", when($"personCity1".isNull or $"personCity2".isNull or $"personCity1" =!= $"personCity2", "Yes").otherwise("No"))
  .drop("PersonId1")
  .drop("PersonId2")
df_final.show()
gives:
+------+-----------+------+-----------+--------+------------+------------+
| Name|personCity1| Name2|personCity2|PersonID|nameChange ?|cityChange ?|
+------+-----------+------+-----------+--------+------------+------------+
| Patsy| 2| Patsy| 333| 6| No| Yes|
|Maksym| 0| null| null| 3333| Yes| Yes|
| null| null| Hans| 3| 4444| Yes| Yes|
|Gloria| 222|Gloria| 2| 7| No| Yes|
| Maria| 2| null| null| 9999| Yes| Yes|
|AgataZ| 0| Agata| 0| 0| Yes| No|
+------+-----------+------+-----------+--------+------------+------------+

Comparing DataFrames in Spark

I have 2 dataframes
df1
+----------+----------------+--------------------+--------------+-------------+
| WEEK|DIM1 |DIM2 |T1 | T2 |
+----------+----------------+--------------------+--------------+-------------+
|2016-04-02| 14| NULL| 9874| 880 |
|2016-04-30| 14|FR | 9875| 13 |
|2017-06-10| 15| PQR| 9867| 57721 |
+----------+----------------+--------------------+--------------+-------------+
df2
+----------+----------------+--------------------+--------------+-------------+
| WEEK|DIM1 |DIM2 |T1 | T2 |
+----------+----------------+--------------------+--------------+-------------+
|2016-04-02| 14| NULL| 9879| 820 |
|2016-04-30| 14|FR | 9785| 9 |
|2017-06-10| 15| XYZ| 9967| 57771 |
+----------+----------------+--------------------+--------------+-------------+
I want to write a comparator in Spark which joins the two dataframes on WEEK, DIM1, DIM2 and compares T1 and T2; T1 and T2 in df1 should be greater than T1 and T2 in df2 by 3. I want to return all rows which do not match the above criterion, together with the difference between the T1 and T2 values across the dataframes. I also want the rows present in df1 but not in df2, and vice versa, for the combination WEEK, DIM1, DIM2.
The output should be like this
+----------+----------------+--------------------+--------------+-------------+------------------+-----------------+
| WEEK|DIM1 |DIM2 |T1_DIFF | T2_DIFF | Present_In_DF1 | Present_In_DF2|
+----------+----------------+--------------------+--------------+-------------+------------------+-----------------+
|2016-04-30| 14|FR | 90| 4 | Y | Y |
|2017-06-10| 15|PQR | 9867| 57721 | Y | N |
|2017-06-10| 15|XYZ | 9967| 57771 | N | Y |
+----------+----------------+--------------------+--------------+-------------+------------------+-----------------+
What is the best way to go about this?
I have implemented the following but do not know how to proceed after this:
val df1 = Seq(
  ("2016-04-02", "14", "NULL", 9874, 880),
  ("2016-04-30", "14", "FR", 9875, 13),
  ("2017-06-10", "15", "PQR", 9867, 57721)
).toDF("WEEK", "DIM1", "DIM2", "T1", "T2")
val df2 = Seq(
  ("2016-04-02", "14", "NULL", 9879, 820),
  ("2016-04-30", "14", "FR", 9785, 9),
  ("2017-06-10", "15", "XYZ", 9967, 57771)
).toDF("WEEK", "DIM1", "DIM2", "T1", "T2")
import org.apache.spark.sql.functions._
val joined = df1.as("l").join(df2.as("r"), Seq("WEEK", "DIM1", "DIM2"), "fullouter")
The joined DataFrame looks like this:
+----------+----+----+----+-----+----+-----+
| WEEK|DIM1|DIM2| T1| T2| T1| T2|
+----------+----+----+----+-----+----+-----+
|2016-04-02| 14|NULL|9874| 880|9879| 820|
|2017-06-10| 15| PQR|9867|57721|null| null|
|2017-06-10| 15| XYZ|null| null|9967|57771|
|2016-04-30| 14| FR|9875| 13|9785| 9|
+----------+----+----+----+-----+----+-----+
I do not know how to proceed from here in a good way; I'm relatively new to Scala.
One easy solution could be to join df1 and df2 with WEEK as the unique key. In the joined data you need to keep all the columns from df1 and df2.
Then you can do a map operation on the dataframe to produce the rest of the columns.
Something like
df1.createOrReplaceTempView("df1")
df2.createOrReplaceTempView("df2")
val df = spark.sql("select df1.*, df2.DIM1 as df2_DIM1, df2.DIM2 as df2_DIM2, df2.T1 as df2_T1, df2.T2 as df2_T2 from df1 join df2 on df1.WEEK = df2.WEEK")
// Now map on the dataframe to produce the diff dataframe
// Or you can use the SQL to do that.
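A hedged sketch of that follow-up step; here it uses the full-outer joined DataFrame already built in the question (joined), rather than the inner join above, so that rows present on only one side are kept:
import org.apache.spark.sql.functions._

val report = joined
  .withColumn("T1_DIFF", col("l.T1") - col("r.T1"))
  .withColumn("T2_DIFF", col("l.T2") - col("r.T2"))
  .withColumn("Present_In_DF1", when(col("l.T1").isNotNull || col("l.T2").isNotNull, "Y").otherwise("N"))
  .withColumn("Present_In_DF2", when(col("r.T1").isNotNull || col("r.T2").isNotNull, "Y").otherwise("N"))
  // keep rows where df1 is greater by more than 3 (as in the expected output),
  // or rows that exist on only one side; adjust this predicate to your exact rule
  .filter(
    (col("T1_DIFF") > 3 && col("T2_DIFF") > 3) ||
    col("Present_In_DF1") === "N" || col("Present_In_DF2") === "N"
  )

report.show(false)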

I need to compare two dataframes for type validation and send a nonzero value as output

I am comparing two dataframes (basically these are the schemas of two different data sources, one from Hive and the other from SAS 9.2).
I need to validate the structure for both data sources, so I converted each schema into a dataframe, and here they are:
The SAS schema is in the format below:
scala> metadata.show
+----+----------------+----+---+-----------+-----------+
|S_No| Variable|Type|Len| Format| Informat|
+----+----------------+----+---+-----------+-----------+
| 1| DATETIME| Num| 8|DATETIME20.|DATETIME20.|
| 2| LOAD_DATETIME| Num| 8|DATETIME20.|DATETIME20.|
| 3| SOURCE_BANK|Char| 1| null| null|
| 4| EMP_NAME|Char| 50| null| null|
| 5|HEADER_ROW_COUNT| Num| 8| null| null|
| 6| EMP_HOURS| Num| 8| 15.2| 15.1|
+----+----------------+----+---+-----------+-----------+
Similarly, the Hive metadata is in the format below:
df2.show
+----------------+-------------+
| Variable| type|
+----------------+-------------+
| datetime|TimestampType|
| load_datetime|TimestampType|
| source_bank| StringType|
| emp_name| StringType|
|header_row_count| IntegerType|
| emp_hours| DoubleType|
+----------------+-------------+
Now, I need to compare both of these on column type and validate the structure. For example, the equivalent of type "Num" is "IntegerType".
Finally, I need to store a non-zero value as output if the schema validation is successful.
How can I achieve this?
You can join the two dataframes and then compare the two columns corresponding to the column types via a Map and a UDF.
This is a code sample that does that.
You need to complete the map with the right values:
val sqlCtx = sqlContext
import sqlCtx.implicits._

val metadata: DataFrame = Seq(
  (Some("1"), "DATETIME", "Num", "8", "DATETIME20", "DATETIME20"),
  (Some("3"), "SOURCEBANK", "Num", "1", "null", "null")
).toDF("SNo", "Variable", "Type", "Len", "Format", "Informat")

val metadataAdapted: DataFrame = metadata
  .withColumn("Name", functions.upper(col("Variable")))
  .withColumnRenamed("Type", "TypeHive")

val sasDF = Seq(
  ("datetime", "TimestampType"),
  ("datetime", "TimestampType")
).toDF("variable", "type")

val sasDFAdapted = sasDF
  .withColumn("Name", functions.upper(col("variable")))
  .withColumnRenamed("Type", "TypeSaS")

val res = sasDFAdapted.join(metadataAdapted, Seq("Name"), "inner")

val map = Map("TimestampType" -> "Num")
def udfType(dict: Map[String, String]) = functions.udf((typeVar: String) => dict(typeVar))

val result = res.withColumn("correctMapping", udfType(map)(col("TypeSaS")) === col("TypeHive"))
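To turn that into the single non-zero output the question asks for, one possible follow-up (a sketch, assuming every joined row must map correctly):
// Count the rows whose mapped SAS type does not agree with the Hive type
val mismatchCount = result.filter(!result("correctMapping")).count()

// Non-zero only when the schema validation succeeded
val validationOutput = if (mismatchCount == 0) 1 else 0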