Spark Dataframes: Add Conditional column to dataframe - scala

I want to add a conditional column Flag to dataframe A. When the following two conditions are satisfied, set Flag to 1, otherwise 0:
(i) num from dataframe A is between numStart and numEnd from dataframe B.
(ii) If the above condition is satisfied, check whether include is 1.
DataFrame A (it's a very big dataframe, containing millions of rows):
+----+------+-----+------------------------+
|num |food  |price|timestamp               |
+----+------+-----+------------------------+
|1275|tomato|1.99 |2018-07-21T00:00:00.683Z|
|145 |carrot|0.45 |2018-07-21T00:00:03.346Z|
|2678|apple |0.99 |2018-07-21T01:00:05.731Z|
|6578|banana|1.29 |2018-07-20T01:11:59.957Z|
|1001|taco  |2.59 |2018-07-21T01:00:07.961Z|
+----+------+-----+------------------------+
DataFrame B (it's a very small DF, containing only 100 rows):
+----------+-----------+-------+
|numStart  |numEnd     |include|
+----------+-----------+-------+
|0         |200        |1      |
|250       |1050       |0      |
|2000      |3000       |1      |
|10001     |15001      |1      |
+----------+-----------+-------+
Expected output:
+----+------+-----+------------------------+----+
|num |food  |price|timestamp               |Flag|
+----+------+-----+------------------------+----+
|1275|tomato|1.99 |2018-07-21T00:00:00.683Z|0   |
|145 |carrot|0.45 |2018-07-21T00:00:03.346Z|1   |
|2678|apple |0.99 |2018-07-21T01:00:05.731Z|1   |
|6578|banana|1.29 |2018-07-20T01:11:59.957Z|0   |
|1001|taco  |2.59 |2018-07-21T01:00:07.961Z|0   |
+----+------+-----+------------------------+----+

You can left-join dfB to dfA based on the condition you described in (i), then build a Flag column using withColumn and the coalesce function to "default" to 0:
Records for which a match was found would use the include value of the matching dfB record
Records for which there was no match would have include=null, and per your requirement such records should get Flag=0, so we use coalesce, which in case of null falls back to the default value given as the literal lit(0)
Lastly, get rid of the dfB columns which are of no interest to you:
import org.apache.spark.sql.functions._
import spark.implicits._ // assuming "spark" is your SparkSession
dfA.join(dfB, $"num".between($"numStart", $"numEnd"), "left")
  .withColumn("Flag", coalesce($"include", lit(0)))
  .drop(dfB.columns: _*)
  .show()
// +----+------+-----+--------------------+----+
// | num|  food|price|           timestamp|Flag|
// +----+------+-----+--------------------+----+
// |1275|tomato| 1.99|2018-07-21T00:00:...|   0|
// | 145|carrot| 0.45|2018-07-21T00:00:...|   1|
// |2678| apple| 0.99|2018-07-21T01:00:...|   1|
// |6578|banana| 1.29|2018-07-20T01:11:...|   0|
// |1001|  taco| 2.59|2018-07-21T01:00:...|   0|
// +----+------+-----+--------------------+----+

Join the two dataframes together on the first condition while keeping all rows in dataframe A (i.e. with a left join, see the code below). After the join, the include column can be renamed to Flag, any null values in it are set to 0, and the two extra columns, numStart and numEnd, are dropped.
The code can thus be written as follows:
A.join(B, $"num" >= $"numStart" && $"num" <= $"numEnd", "left")
  .withColumnRenamed("include", "Flag")
  .drop("numStart", "numEnd")
  .na.fill(Map("Flag" -> 0))
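Since dataframe A has millions of rows while B has only about 100, it may also be worth adding an explicit broadcast hint on B, so the non-equi range join is executed by shipping the small table to every executor instead of shuffling the large one (Spark may already choose that plan on its own; the following is just the same join as above with the hint added):
import org.apache.spark.sql.functions.broadcast

// Same left join as above, but explicitly broadcasting the small dataframe B.
A.join(broadcast(B), $"num" >= $"numStart" && $"num" <= $"numEnd", "left")
  .withColumnRenamed("include", "Flag")
  .drop("numStart", "numEnd")
  .na.fill(Map("Flag" -> 0))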

Related

Ambiguous Column in DataFrame Join - Unable to Alias or Call

I'm getting into Databricks from a SQL background and working with some dataframe samples for basic join transformations, and I am having issues isolating the correct dataframe column for further transformations after the join.
For DF1, I have 3 columns: user_id, user_ts, email. For DF2, I have two columns: email, converted.
Below is how I have the logic for the join. This works and returns 5 columns; however, there are two email columns in the schema
df3 = (df1
.join(df2, df1.email == df2.email, "outer")
)
I am trying to do some basic transformation on the df2 email column as part of the dataframe expression, but I receive the error:
"Cannot resolve column name "df2.email" among (user_id, user_ts, email, email, converted)"
df3 = (df1
.join(df2, df1.email == df2.email, "outer")
.na.fill(False,["df2.email"])
)
If I remove the df2 from the fill(), I get the error that the columns are ambiguous.
How can I specify which column I want to transform when it has the same name as another column? In SQL, I would just qualify the column with a table alias, but this doesn't seem to be how PySpark is best used.
Suggestions?
If you want to avoid having both key columns in the join result and get a combined result, then you can pass a list of key columns as an argument to the join() method.
If you want to retain the same key columns from both dataframes, then you have to rename one of the columns before doing the transformation, otherwise Spark will throw an ambiguous column error.
df1 = spark.createDataFrame([(1, 'abc#gmail.com'),(2,'def#gmail.com')],["id1", "email"])
df2 = spark.createDataFrame([(1, 'abc#gmail.com'),(2,'ghi#gmail.com')],["id2", "email"])
df1.join(df2,['email'], 'outer').show()
'''
+-------------+----+----+
| email| id1| id2|
+-------------+----+----+
|def#gmail.com| 2|null|
|ghi#gmail.com|null| 2|
|abc#gmail.com| 1| 1|
+-------------+----+----+'''
df1.join(df2,df1['email'] == df2['email'], 'outer').show()
'''
+----+-------------+----+-------------+
| id1| email| id2| email|
+----+-------------+----+-------------+
| 2|def#gmail.com|null| null|
|null| null| 2|ghi#gmail.com|
| 1|abc#gmail.com| 1|abc#gmail.com|
+----+-------------+----+-------------+'''
df1.join(df2,df1['email'] == df2['email'], 'outer') \
.select('id1', 'id2', df1['email'], df2['email'].alias('email2')) \
.na.fill('False','email2').show()
'''
+----+----+-------------+-------------+
| id1| id2| email| email2|
+----+----+-------------+-------------+
| 2|null|def#gmail.com| False|
|null| 2| null|ghi#gmail.com|
| 1| 1|abc#gmail.com|abc#gmail.com|
+----+----+-------------+-------------+ '''

pyspark: filtering rows by length of inside values

I have a PySpark dataframe with a column that contains a Python list:
id value
1 [1,2,3]
2 [1,2]
I want to remove all rows where the length of the list in the value column is less than 3.
So I tried:
df.filter(len(df.value) >= 3)
and indeed it does not work.
How can I filter the dataframe by the length of the inside data?
You can use the size() function: it returns the length of the array or map stored in the column.
from pyspark.sql.functions import size
myValues = [(1,[1,2,3]),(2,[1,2])]
df = sqlContext.createDataFrame(myValues,['id','value'])
df.show()
+----+---------+
|  id|    value|
+----+---------+
|   1|  [1,2,3]|
|   2|    [1,2]|
+----+---------+
df.filter(size(df.value) >= 3).show()
+----+---------+
|  id|    value|
+----+---------+
|   1|  [1,2,3]|
+----+---------+

remove duplicate column from dataframe using scala

I need to remove one column from a DataFrame that has another column with the same name. I need to remove only one of the two and keep the other for further use.
For example, given this input DF:
sno | age | psk | psk
---------------------
1 | 12 | a4 | a4
I would like to obtain this output DF:
sno | age | psk
----------------
1 | 12 | a4
Going through an RDD is one way (but you need to know the index of the duplicate column in order to drop it before converting back to a dataframe).
If you have dataframe with duplicate columns as
+---+---+---+---+
|sno|age|psk|psk|
+---+---+---+---+
|1 |12 |a4 |a4 |
+---+---+---+---+
You know that the last two columns are duplicates.
The next step is to get the column names with the duplicates removed and to form the schema:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType}

val columns = df.columns.distinct // unlike toSet, distinct preserves the original column order
val schema = StructType(columns.map(name => StructField(name, StringType, true)))
The vital part is to convert the dataframe to an rdd and drop the required column index (here it is the 4th):
val rdd = df.rdd.map(row => Row.fromSeq(Seq(row(0).toString, row(1).toString, row(2).toString)))
The final step is to convert the rdd back to a dataframe using the schema:
sqlContext.createDataFrame(rdd, schema).show(false)
which should give you
+---+---+---+
|sno|age|psk|
+---+---+---+
|1 |12 |a4 |
+---+---+---+
I hope the answer is helpful
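As a side note, if renaming columns positionally is acceptable, you can often avoid the RDD round-trip altogether: toDF lets you give the duplicate column a temporary unique name, after which it can simply be dropped. A minimal sketch under that assumption (psk_dup is just a throwaway name):
// Rename all columns by position so the second "psk" gets a unique temporary name,
// then drop the temporary column.
val deduped = df.toDF("sno", "age", "psk", "psk_dup").drop("psk_dup")
deduped.show(false)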

Scala: how to match two dfs, and if they match then update the key in the first df

I have data in two dataframes:
selectedPersonDF:
ID key
1
2
3
4
5
selectedDetailsDF:
first second third key
--------------------------
1 9 9 777
9 8 8 878
8 10 10 765
10 12 19 909
11 2 20 708
Code :
val personDF = spark.read.option("header", "true").option("inferSchema", "false").csv("person.csv")
val detailsDF = spark.read.option("header", "true").option("inferSchema", "false").csv("details.csv")
val selectedPersonDF = personDF.select(col("ID"), col("key"))
val selectedDetailsDF = detailsDF.select(col("first"), col("second"), col("third"), col("key"))
I have to match the selectedPersonDF ID column against all the selectedDetailsDF columns (first, second, third). If any of those columns matches a person's ID, then we have to take the key value from selectedDetailsDF and update the key column in selectedPersonDF.
Expected output (in selectedPersonDF):
ID key
1 777
2 708
3
4
5
Also, after removing the matched rows from the persons df (since they matched with the details df), the remaining data should be stored in another df.
You can use a left join with an || (or) condition check, as shown below:
import spark.implicits._ // for the $"colName" syntax

val finalDF = selectedPersonDF.join(selectedDetailsDF.withColumnRenamed("key", "key2"),
    $"ID" === $"first" || $"ID" === $"second" || $"ID" === $"third", "left")
  .select($"ID", $"key2".as("key"))
finalDF.show(false)
so finalDF should give you
+---+----+
|ID |key |
+---+----+
|1 |777 |
|2 |708 |
|3 |null|
|4 |null|
|5 |null|
+---+----+
We can call .na.fill("") on the above dataframe (the key column has to be StringType) to get
+---+---+
|ID |key|
+---+---+
|1 |777|
|2 |708|
|3 | |
|4 | |
|5 | |
+---+---+
After that you can use filter to separate the filled dataframe into non-matching and matching rows, using the empty and non-empty values of the key column respectively:
val filledDF = finalDF.na.fill("")
val notMatchingDF = filledDF.filter($"key" === "")
val matchingDF = filledDF.except(notMatchingDF)
Update: if the column names of selectedDetailsDF are unknown (except for the key column)
If the column names of the second dataframe are unknown, then you will have to form an array column out of the unknown columns:
val columnsToCheck = (selectedDetailsDF.columns.toSet - "key").toList
import org.apache.spark.sql.functions._
val tempSelectedDetailsDF = selectedDetailsDF.select(array(columnsToCheck.map(col): _*).as("array"), col("key").as("key2"))
Now the tempSelectedDetailsDF dataframe has two columns: an array column combining all the unknown columns, and the key column renamed to key2.
After that you need a udf function to check the condition while joining:
val arrayContains = udf((array: collection.mutable.WrappedArray[String], value: String) => array.contains(value))
Then you join the dataframes using a call to the defined udf function:
val finalDF = selectedPersonDF.join(tempSelectedDetailsDF, arrayContains($"array", $"ID"), "left")
.select($"ID", $"key2".as("key"))
.na.fill("")
Rest of the process is already defined above.
I hope the answer is helpful and understandable.
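As a side note, depending on your Spark version you may not need the udf at all: array_contains has been available as a SQL function since Spark 1.5, so the membership test can be written as a SQL expression with expr. This is only a sketch of the same join under that assumption:
import org.apache.spark.sql.functions.expr

// Let Spark resolve "array" (from tempSelectedDetailsDF) and "ID" (from selectedPersonDF)
// inside a SQL expression used as the join condition.
val finalDF = selectedPersonDF
  .join(tempSelectedDetailsDF, expr("array_contains(`array`, ID)"), "left")
  .select($"ID", $"key2".as("key"))
  .na.fill("")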

Spark SQL Dataframe API - build filter condition dynamically

I have two Spark dataframes, df1 and df2:
+-------+-----+---+
| name|empNo|age|
+-------+-----+---+
|shankar|12121| 28|
| ramesh| 1212| 29|
| suresh| 1111| 30|
| aarush| 0707| 15|
+-------+-----+---+
+------+-----+---+-----+
| eName| eNo|age| city|
+------+-----+---+-----+
|aarush|12121| 15|malmo|
|ramesh| 1212| 29|malmo|
+------+-----+---+-----+
I need to get the non-matching records from df1, based on a number of columns specified in another file.
For example, the column lookup file is something like this:
df1col,df2col
name,eName
empNo, eNo
Expected output is:
+-------+-----+---+
| name|empNo|age|
+-------+-----+---+
|shankar|12121| 28|
| suresh| 1111| 30|
| aarush| 0707| 15|
+-------+-----+---+
The question is how to build a where condition dynamically for the above scenario, given that the lookup file is configurable and might have 1 to n fields.
You can use the except dataframe method. For simplicity, I'm assuming that the columns to use are in two lists. It's necessary that the order of both lists is correct: the columns at the same position in the lists will be compared (regardless of column name). After except, use join to get the missing columns back from the first dataframe.
import org.apache.spark.sql.functions.broadcast
import spark.implicits._ // for .toDF on local Seqs

val df1 = Seq(("shankar","12121",28),("ramesh","1212",29),("suresh","1111",30),("aarush","0707",15))
  .toDF("name", "empNo", "age")
val df2 = Seq(("aarush", "12121", 15, "malmo"),("ramesh", "1212", 29, "malmo"))
.toDF("eName", "eNo", "age", "city")
val df1Cols = List("name", "empNo")
val df2Cols = List("eName", "eNo")
val tempDf = df1.select(df1Cols.head, df1Cols.tail: _*)
.except(df2.select(df2Cols.head, df2Cols.tail: _*))
val df = df1.join(broadcast(tempDf), df1Cols)
The resulting dataframe will look as wanted:
+-------+-----+---+
| name|empNo|age|
+-------+-----+---+
| aarush| 0707| 15|
| suresh| 1111| 30|
|shankar|12121| 28|
+-------+-----+---+
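Since the lookup file is configurable and may contain 1 to n column pairs, the two hard-coded lists above can also be derived from it. A minimal sketch, assuming the lookup file is a headered CSV available at a hypothetical path lookup.csv:
// Read the column-mapping file and split it into the df1/df2 column lists.
val lookupDF = spark.read.option("header", "true").csv("lookup.csv")
val pairs = lookupDF.collect().map(r => (r.getString(0).trim, r.getString(1).trim))
val df1Cols = pairs.map(_._1).toList
val df2Cols = pairs.map(_._2).toList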
If you're doing this from a SQL query, I would remap the column names in the SQL query itself (as in "Changing a SQL column title via query"). You could do a simple text replace in the query to normalize them to the df1 or df2 column names.
Once you have that, you can diff using something like "How to obtain the difference between two DataFrames?".
If you need more columns that wouldn't be used in the diff (e.g. age) you can reselect the data again based on your diff results. This may not be the optimal way of doing it but it would probably work.