Implementing SQL logic via DataFrames using Spark and Scala

I have three columns (c1, c2, c3) in a Hive table t1, and MySQL code that checks whether specific columns are null. I also have a DataFrame, df, built from the same table with the same three columns c1, c2, c3, and I would like to implement the same logic via that DataFrame.
Here is the SQL:
if(
  t1.c1=0 Or IsNull(t1.c1),
  if(
    IsNull(t1.c2/t1.c3),
    1,
    t1.c2/t1.c3
  ),
  t1.c1
) AS myalias
I have drafted the following logic in Scala using "when" as an alternative to SQL's "if", but I am facing a problem writing the "Or" logic (shown below).
val df_withalias = df.withColumn("myalias",when(
Or((df("c1") == 0), isnull(df("c1"))),
when(
(isNull((df("c2") == 0)/df("c3")),
)
)
)
How can I write the above logic?

First, you can use Column's || operator to construct logical OR conditions. Also note that when takes only two arguments (a condition and a value); if you want to supply an alternative value (to be used when the condition isn't met), you need to use .otherwise:
import org.apache.spark.sql.functions._

val df_withalias = df.withColumn("myalias",
  when(df("c1") === 0 || isnull(df("c1")),
    when(isnull(df("c2") / df("c3")), 1).otherwise(df("c2") / df("c3"))
  ).otherwise(df("c1"))
)
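As a side note (not part of the original answer), the inner when/otherwise pair can be written a bit more compactly with coalesce, which returns its first non-null argument; a minimal sketch, assuming the same df and the functions import above:
// Equivalent sketch: if c2/c3 is null, fall back to 1.
val df_withalias2 = df.withColumn("myalias",
  when(df("c1") === 0 || isnull(df("c1")), coalesce(df("c2") / df("c3"), lit(1)))
    .otherwise(df("c1"))
)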

Related

pyspark join on a secondary column when primary column finds no match

The following code is intended to do this operation; however, it does not work. It still joins on both columns, yielding two rows when I should only get one row.
df = df.join(
    msa,
    (msa.msa_join_key == df.metro_join_key)
    | (
        (msa.msa_join_key != df.metro_join_key)
        & (msa.msa_join_key == df.state_join_key)
    ),
    "left",
)
This code should work, so I am not sure why it does not. I've seen similar questions (e.g. pyspark - join with OR condition) that also fail to solve this, since they too yield additional rows.
Is this a known bug in pyspark? Is there a different way to do this join?

How to refer to columns containing f-strings in a Pyspark function?

I am writing a function for a Spark DF that performs operations on columns and gives them a suffix, such that I can run the function twice on two different suffixes and join them later.
I am having a hard time figuring out the best way to refer to them in this particular bit of code, however, and was wondering what I am missing.
def calc_date(sdf, suffix):
    final_sdf = (
        sdf.withColumn(
            f"lowest_days{suffix}",
            f"sdf.list_of_days_{suffix}"[0],
        )
        .withColumn(
            f"earliest_date_{suffix}",
            f"sdf.list_of_dates_{suffix}"[0],
        )
        .withColumn(
            f"actual_date_{suffix}",
            spark_fns.expr(
                f"date_sub(earliest_date_{suffix}, lowest_days{suffix})"
            ),
        )
    )
Here I am trying to pull the first value from two lists (list_of_days and list_of_dates) and perform a date calculation to create a new variable (actual_date).
I would like to do this in a function so that I don't have to repeat the same set of operations twice (or more), depending on the number of suffixes I have.
But the f-strings give an error: col should be Column.
Any help on this would be greatly appreciated!
You need to wrap the second argument with a col().
from pyspark.sql.functions import *

def calc_date(sdf, suffix):
    final_sdf = (
        sdf.withColumn(
            f"lowest_days{suffix}",
            col(f"list_of_days_{suffix}")[0],
        )
        .withColumn(
            f"earliest_date_{suffix}",
            col(f"list_of_dates_{suffix}")[0],
        )
    )

Variable substitution in scala

I have two dataframes in Scala (srcdataframe and tgttable), both holding data from two different tables but of the same structure. I have to join these two based on a composite primary key, select a few columns, and append two columns; the code for this is below:
for(i <- 2 until numCols) {
srcdataframe.as("A")
.join(tgttable.as("B"), $"A.INSTANCE_ID" === $"B.INSTANCE_ID" &&
$"A.CONTRACT_LINE_ID" === $"B.CONTRACT_LINE_ID", "inner")
.filter($"A." + srcColnm(i) =!= $"B." + srcColnm(i))
.select($"A.INSTANCE_ID",
$"A.CONTRACT_LINE_ID",
"$"+"\""+"A."+srcColnm(i)+"\""+","+"$"+"\""+"B."+srcColnm(i)+"\"")
.withColumn("MisMatchedCol",lit("\""+srcColnm(i)+"\""))
.withColumn("LastRunDate",current_timestamp.cast("long"))
.createOrReplaceTempView("IPF_1M_Mismatch");
hiveSQLContext.sql("Insert into table xxxx.f2f_Mismatch1 select t.* from (select * from IPF_1M_Mismatch) t");}
Here are the things I am trying to do:
Inner join of srcdataframe and tgttable based on instance_id and contract_line_id.
Select only instance_id, contract_line_id, mismatched_col_values, hardcode of mismatched_col_nm, timestamp.
srcColnm(i) is an array of strings which contains the non-primary keys to be compared.
However, I am not able to resolve the variables inside the dataframe operations in the for loop. I tried looking for solutions here and here. I gathered that it may be because of the way Spark substitutes the variables only at compile time; in that case, I'm not sure how to resolve it.
Instead of creating columns with $, you can simply use strings or the col() function. I would also recommend performing the join outside of the for loop, as it's an expensive operation. Here is the slightly changed code; the main difference, which solves your problem, is in the select:
val df = srcdataframe.as("A")
  .join(tgttable.as("B"), Seq("INSTANCE_ID", "CONTRACT_LINE_ID"), "inner")

for (columnName <- srcColnm) {
  df.filter(col("A." + columnName) =!= col("B." + columnName))
    .select("INSTANCE_ID", "CONTRACT_LINE_ID", "A." + columnName, "B." + columnName)
    .withColumn("MisMatchedCol", lit(columnName))
    .withColumn("LastRunDate", current_timestamp().cast("long"))
    .createOrReplaceTempView("IPF_1M_Mismatch")
  // Hive command
}
Regarding the problem in select:
$ is short for the col() function; it selects a column in the dataframe by name. The problem in the select is that the first two arguments, col("A.INSTANCE_ID") and col("A.CONTRACT_LINE_ID"), are columns ($ replaced by col() for clarity).
However, the next two arguments are strings. It is not possible to mix the two: either all arguments should be columns or all should be strings. Since you used "A." + srcColnm(i) to build up the column name, $ can't be used; however, you could have used col("A." + srcColnm(i)).
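For illustration (not part of the original answer), the select inside the question's loop could therefore have been written with col() for every argument, reusing the question's variable names and assuming spark.implicits._ and org.apache.spark.sql.functions._ are in scope:
// Sketch of the corrected loop body: every select argument is a Column built with col().
val mismatches = srcdataframe.as("A")
  .join(tgttable.as("B"), $"A.INSTANCE_ID" === $"B.INSTANCE_ID" &&
    $"A.CONTRACT_LINE_ID" === $"B.CONTRACT_LINE_ID", "inner")
  .filter(col("A." + srcColnm(i)) =!= col("B." + srcColnm(i)))
  .select(col("A.INSTANCE_ID"),
    col("A.CONTRACT_LINE_ID"),
    col("A." + srcColnm(i)),
    col("B." + srcColnm(i)))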

Minus logic implementation not working with spark/scala

Minus Logic in Hive:
The Hive query below returns only the records that are available in the left-side table (Full_Table ft), and not in both tables.
Select ft.* from Full_Table ft left join Stage_Table stg where stg.primary_key1 IS null and stg.primary_key2 IS null
I tried to implement the same in Spark/Scala using the following method (to support both primary-key and composite-key joins), but the joined result set does not contain the columns from the right table, so I am not able to apply the stg.primary_key2 IS null condition on the joined result set.
ft.join(stg, usingColumns, "left_outer") // used a Seq to support composite key column join
Please suggest how to implement minus logic in Spark/Scala.
If your tables have the same columns, you can use the except method from Dataset:
fullTable.except(stageTable)
If they don't, but you are only interested in a subset of columns that exists in both tables, you can first select those columns using the select transformation and then use except:
val fullTableSelectedColumns = fullTable.select("c1", "c2", "c3")
val stageTableSelectedColumns = stageTable.select("c1", "c2", "c3")
fullTableSelectedColumns.except(stageTableSelectedColumns)
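One caveat worth noting (not in the original answer): except behaves like SQL's EXCEPT DISTINCT and removes duplicate rows as well; if duplicates need to be preserved, Spark 2.4+ also offers exceptAll with the same signature:
// Like except, but keeps duplicate rows (available since Spark 2.4).
val minusKeepingDuplicates = fullTableSelectedColumns.exceptAll(stageTableSelectedColumns)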
Otherwise, you can use the join and filter transformations:
fullTable
  .join(stageTable, fullTable("primary_key1") === stageTable("primary_key1") &&
    fullTable("primary_key2") === stageTable("primary_key2"), "left")
  // rows with no match on the right have null stage-side keys, so keep only those
  .filter(stageTable("primary_key1").isNull)
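A more direct alternative, not mentioned in the original answer, is Spark's left_anti join type, which keeps only the left-side rows that have no match on the right; a minimal sketch using the composite key names from the question:
// Minus logic via an anti join: rows of ft with no matching composite key in stg.
val minusResult = ft.join(stg, Seq("primary_key1", "primary_key2"), "left_anti")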

Replace Empty values with nulls in Spark Dataframe

I have a data frame with n columns and I want to replace the empty strings in all these columns with nulls.
I tried using
val ReadDf = rawDF.na.replace("columnA", Map( "" -> null));
and
val ReadDf = rawDF.withColumn("columnA", if($"columnA"=="") lit(null) else $"columnA" );
Neither of them worked.
Any leads would be highly appreciated. Thanks.
Your first approach seems to fail due to a bug that prevents replace from being able to replace values with nulls; see here.
Your second approach fails because you're confusing driver-side Scala code with executor-side DataFrame instructions: your if-else expression would be evaluated once on the driver (and not per record). You'd want to replace it with a call to the when function. Moreover, to compare a column's value you need to use the === operator, not Scala's ==, which just compares the driver-side Column objects:
import org.apache.spark.sql.functions._
rawDF.withColumn("columnA", when($"columnA" === "", lit(null)).otherwise($"columnA"))
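Since the question asks about all n columns, here is a minimal sketch (not from the original answer, and assuming every column of rawDF holds strings) that folds the same when/otherwise expression over df.columns:
// Apply the same empty-string-to-null replacement to every column of rawDF.
val cleanedDF = rawDF.columns.foldLeft(rawDF) { (acc, c) =>
  acc.withColumn(c, when(col(c) === "", lit(null)).otherwise(col(c)))
}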