How to get access to the full Row in Spark UDF function? - scala

We are using plain SQL syntax to transform the data and have custom UDF functions.
Example:
UDF_FUNCTION(String, Int)
This function can throw exceptions, and we would like to give the user a detailed error message. Each row carries information about the source file and a row_id, which is why we want to access the full row inside the UDF, so that the error can include the file_uuid and row_id, for example. Does anyone have an idea how to do this?
Thanks

You can use the struct function to send all columns to the UDF. You have to use Row as the input parameter type in the UDF's anonymous function, something like the example below:
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.functions.{col, struct, udf}

// the UDF receives the whole row and can read any column by name
def udf_full_row = udf { (row: Row) =>
  val your_transformed_int = row.getAs[Int]("value as int") + 1
  your_transformed_int
}

val df_test: DataFrame = ???
val cols_array = df_test.columns.map(col(_))
df_test.withColumn("your_new_column", udf_full_row(struct(cols_array: _*)))
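Applied to the question itself, here is a minimal sketch of how the UDF could surface the file and row identifiers in the error message; the column names value, file_uuid and row_id are assumptions, not taken from the original code:
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf

// sketch only: column names "value", "file_uuid" and "row_id" are assumed
def udf_with_error_context = udf { (row: Row) =>
  try {
    row.getAs[Int]("value") + 1
  } catch {
    case e: Exception =>
      val fileUuid = row.getAs[String]("file_uuid")
      val rowId = row.getAs[Long]("row_id")
      throw new RuntimeException(s"Transformation failed for file_uuid=$fileUuid, row_id=$rowId", e)
  }
}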

Related

Find columns to select, for spark.read(), from another Dataset - Spark Scala

I have a Dataset[Year] that has the following schema:
case class Year(day: Int, month: Int, Year: Int)
Is there any way to make a collection of the current schema?
I have tried:
println("Print -> "+ds.collect().toList)
But the result was:
Print -> List([01,01,2022], [31,01,2022])
I expected something like:
Print -> List(Year(01,01,2022), Year(31,01,2022))
I know that I could fix this with a map, but I am trying to create a generic method that accepts any schema, so I cannot add a map doing the conversion.
This is my method:
class SchemeList[A] {
  def set[A](ds: Dataset[A]): List[A] = {
    ds.collect().toList
  }
}
Apparently the method has the correct signature, but at runtime it fails with an error:
val setYears = new SchemeList[Year]
val YearList: List[Year] = setYears.set(df)
Exception in thread "main" java.lang.ClassCastException: org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema cannot be cast to schemas.Schemas$Year
Based on your additional information in your comment:
I need this list to use as variables when creating another dataframe via JDBC (I need to make a specific select within PostgreSQL). Is there a more performant way to pass values from a dataframe as parameters in a select?
Given your initial dataset:
val yearsDS: Dataset[Year] = ???
and that you want to do something like:
val desiredColumns: Array[String] = ???
spark.read.jdbc(..).select(desiredColumns.head, desiredColumns.tail: _*)
You could find the column names of yearsDS by doing:
val desiredColumns: Array[String] = yearsDS.columns
Spark achieves this by using def schema, which is defined on Dataset.
You can see the definition of def columns in the Dataset source code.
Maybe you got a DataFrame, not a Dataset.
Try using as to convert the DataFrame to a Dataset, like this:
import spark.implicits._

val year = Year(1, 1, 1)
val years = Array(year, year).toList
val df = spark
  .sparkContext
  .parallelize(years)
  .toDF("day", "month", "Year")
  .as[Year]
println(df.collect().toList)
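Building on that, a minimal sketch of how the generic method can stay generic and still avoid the ClassCastException: as long as the value passed in is already a typed Dataset[A] (for example df.as[Year]), collect() returns instances of A instead of GenericRowWithSchema:
import org.apache.spark.sql.Dataset

class SchemeList[A] {
  // no extra type parameter needed; collect() on a typed Dataset[A] already yields A
  def set(ds: Dataset[A]): List[A] = ds.collect().toList
}

// usage (assuming ds is a Dataset[Year], e.g. obtained via df.as[Year]):
// val yearList: List[Year] = new SchemeList[Year].set(ds)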

Update date format in spark dataframe for multiple spark columns

I have a Spark dataframe where a few columns have different date formats.
To handle this I have written the code below to keep a consistent format for all the date columns.
Since the date format may change every time, I have defined a set of candidate formats in dt_formats.
def to_timestamp_multiple(s: Column, formats: Seq[String]): Column = {
  coalesce(formats.map(fmt => to_timestamp(s, fmt)): _*)
}
val dt_formats= Seq("dd-MMM-yyyy", "MMM-dd-yyyy", "yyyy-MM-dd","MM/dd/yy","dd-MM-yy","dd-MM-yyyy","yyyy/MM/dd","dd/MM/yyyy")
val newDF = df.withColumn("ETD1", date_format(to_timestamp_multiple($"ETD",Seq("dd-MMM-yyyy", dt_formats)).cast("date"), "yyyy-MM-dd")).drop("ETD").withColumnRenamed("ETD1","ETD")
But this way I have to create a new column, drop the old column and then rename the new one, which makes the code unnecessarily clumsy; I want to avoid this and overwrite the column instead.
I am trying to implement similar functionality with the Scala function below, but it throws org.apache.spark.sql.catalyst.parser.ParseException, and I am unable to identify what change I should make to get it to work.
val CleansedData = rawDF.selectExpr(rawDF.columns.map(
  x => x match {
    case "ETA" => s"""date_format(to_timestamp_multiple($x, dt_formats).cast("date"), "yyyy-MM-dd") as ETA"""
    case _ => x
  }): _*)
Hence seeking help.
Thanks in advance.
Create a UDF in order to use with select. The select method takes columns and produces another DataFrame.
Also, instead of using coalesce, it might be more straightforward simply to build a parser that handles all of the formats. You can use DateTimeFormatterBuilder for this.
import java.time.LocalDate
import java.time.format.DateTimeFormatter
import java.time.format.DateTimeFormatterBuilder
import java.sql.Date
import scala.util.Try
import org.apache.spark.sql.functions.{col, udf}
import spark.implicits._

val dtFormatStrings: Seq[String] = Seq("dd-MMM-yyyy", "MMM-dd-yyyy", "yyyy-MM-dd", "MM/dd/yy", "dd-MM-yy", "dd-MM-yyyy", "yyyy/MM/dd", "dd/MM/yyyy")

// use foldLeft with the appendOptional method, which for each format
// returns a builder with that additional possible format
val initBuilder = new DateTimeFormatterBuilder()
val builder: DateTimeFormatterBuilder = dtFormatStrings.foldLeft(initBuilder)(
  (b: DateTimeFormatterBuilder, s: String) => b.appendOptional(DateTimeFormatter.ofPattern(s)))
val formatter = builder.toFormatter()

// create the UDF, which just wraps
// any function returning a sql-compatible type (java.sql.Date, here)
def toTimeStamp2(dateString: String): Date = {
  val dateTry: Try[Date] = Try(java.sql.Date.valueOf(LocalDate.parse(dateString, formatter)))
  dateTry.toOption.getOrElse(null)
}
val timeConversionUdf = udf(toTimeStamp2 _)

// example DF and new DF
val df = Seq(("05/08/20"), ("2020-04-03"), ("unparseable")).toDF("ETD")
df.select(timeConversionUdf(col("ETD"))).toDF("ETD2").show
Output:
+----------+
| ETD2|
+----------+
|2020-05-08|
|2020-04-03|
| null|
+----------+
Note that unparseable values end up null, as shown.
Try withColumn(...) with the same column name and coalesce, as below:
val dt_formats= Seq("dd-MMM-yyyy", "MMM-dd-yyyy", "yyyy-MM-dd","MM/dd/yy","dd-MM-yy","dd-MM-yyyy","yyyy/MM/dd","dd/MM/yyyy")
val newDF = df.withColumn("ETD", coalesce(dt_formats.map(fmt => to_date($"ETD", fmt)):_*))
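Since the question mentions multiple date columns, here is a minimal sketch of extending this to all of them with foldLeft; the list of column names (dateCols here) is an assumption:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{coalesce, col, to_date}

// dateCols is an assumed list of the columns that hold dates
val dateCols = Seq("ETD", "ETA")
val normalizedDF: DataFrame = dateCols.foldLeft(df) { (acc, c) =>
  acc.withColumn(c, coalesce(dt_formats.map(fmt => to_date(col(c), fmt)): _*))
}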

Creating and using Spark-Hive UDF for Date

Note: this question is linked from this question: Creting UDF function with NonPrimitive Data Type and using in Spark-sql Query: Scala
I have created a method in Scala:
package test.udf.demo

object UDF_Class {
  def transformDate(dateColumn: String, df: DataFrame): DataFrame = {
    val sparksession = SparkSession.builder().appName("App").getOrCreate()
    val d = df.withColumn("calculatedCol", month(to_date(from_unixtime(unix_timestamp(col(dateColumn), "dd-MM-yyyy")))))
    df.withColumn("date1", when(col("calculatedCol") === "01", concat(concat(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))) - 1, lit('-')), substring(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol")), "dd-MM-yyyy"))), 3, 4))
      .when(col("calculatedCol") === "02", concat(concat(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))) - 1, lit('-')), substring(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol")), "dd-MM-yyyy"))), 3, 4)))
      .when(col("calculatedCol") === "03", concat(concat(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))) - 1, lit('-')), substring(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol")), "dd-MM-yyyy"))), 3, 4)))
      .otherwise(concat(concat(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))), lit('-')), substring(year(to_date(from_unixtime(unix_timestamp(col("calculatedCol"), "dd-MM-yyyy")))) + 1, 3, 4)))))
    val d1 = sparksession.udf.register("transform", transformDate _)
    d
  }
}
I want to use this transformDate method in my Spark SQL query, which is separate Scala code in the same package.
package test.udf.demo
import test.udf.demo.transformDate
//sparksession
sparksession.sql("select id,name,salary,transform(dob) from dbname.tablename")
but I get an error
not a temp or permanent registered function in default database
Can someone please guide me?
AFAIK, Spark user-defined functions cannot accept or return a DataFrame. That is what stops your UDF from being registered.
First of all, a Spark SQL UDF is a row-based function, not a DataFrame-based method; an aggregate UDF also takes a series of Rows. So the UDF definition is wrong. If I understood your requirement correctly, you want to create a configurable expression of CASE statements. That can easily be achieved with expr().
import spark.implicits._

val exprStr = "case when calculatedCol = '01' then <here goes your code statements> end as FP"
val modifiedDf = spark.sql(s"select id, name, salary, $exprStr from dbname.tablename")
It will work
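As a concrete, hypothetical sketch of that idea: the CASE branches can be assembled from a configurable list of months and interpolated into the query; the dob column and the branch bodies below are placeholders, not the asker's exact logic:
// hedged sketch: build a configurable CASE expression and run it through Spark SQL
// (the THEN/ELSE expressions are illustrative placeholders)
val earlyMonths = Seq(1, 2, 3)
val whenBranches = earlyMonths
  .map(m => s"when month(to_date(dob, 'dd-MM-yyyy')) = $m then year(to_date(dob, 'dd-MM-yyyy')) - 1")
  .mkString(" ")
val exprStr = s"case $whenBranches else year(to_date(dob, 'dd-MM-yyyy')) end as FP"
val modifiedDf = spark.sql(s"select id, name, salary, $exprStr from dbname.tablename")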

Spark UDF - Return type DataFrame

I have created a UDF which adds a Flag column to a DataFrame and returns the new DataFrame.
def find_mismatch = udf((df: DataFrame) => {
  df.withColumn("Flag", when(df("T_RTR_NUM").isNull && df("P_RTR_NUM").isNull,
    "Present in Flex but missing Trn and Platform"))
})
I am able to create the UDF, but when I pass a DataFrame into it, it errors out.
It works as a normal function, but as a Spark UDF it errors out.
Also, help me understand what difference it makes if I use a normal function instead of a Spark UDF.
Please help. I have attached a screenshot of the code.
You can't pass a DataFrame to a UDF: a DataFrame is handled by the Spark context, i.e. at the driver, and you can't pass it along to a UDF, which runs on the different executors (each holding only a fraction of the dataframe).
Specifically, about the problem you're trying to solve: as mentioned by @Manoj, you don't actually need a UDF to get the result you need.
You can do this without a UDF, like below:
import org.apache.spark.sql.{Dataset, Row}
import org.apache.spark.sql.functions.when

def findMismatch(df: Dataset[Row]): Dataset[Row] = {
  val transDF = df.withColumn("Flag", when(df("T_RTR_NUM").isNull && df("P_RTR_NUM").isNull, "Present in Flex but missing Trn and Platform"))
  transDF
}

val transDF = findMismatch(df)

udf spark column names

I need to specify a sequence of columns. If I pass two strings, it works fine
val cols = array("predicted1", "predicted2")
but if I pass a sequence or an array, I get an error:
val cols = array(Seq("predicted1", "predicted2"))
Could you please help me? Many thanks!
You have at least two options here:
Using a Seq[String]:
val columns: Seq[String] = Seq("predicted1", "predicted2")
array(columns.head, columns.tail: _*)
Using a Seq[ColumnName]:
val columns: Seq[ColumnName] = Seq($"predicted1", $"predicted2")
array(columns: _*)
The function signature is def array(colName: String, colNames: String*): Column, which means it takes one string followed by a variable number of further strings. If you want to use a sequence, do it like this:
array("predicted1", Seq("predicted2"):_*)
From what I can see in the code, there are a couple of overloaded versions of this function, but neither one takes a Seq directly. So converting it into varargs as described should be the way to go.
You can use Spark's array form def array(cols: Column*): Column where the cols val is defined without using the $ column name notation -- i.e. when you want to have a Seq[ColumnName] type specifically, but created using strings. Here is how to solve that...
import org.apache.spark.sql.ColumnName
import sqlContext.implicits._
import org.apache.spark.sql.functions._
val some_states: Seq[String] = Seq("state_AK","state_AL","state_AR","state_AZ")
val some_state_cols: Seq[ColumnName] = some_states.map(s => symbolToColumn(scala.Symbol(s)))
val some_array = array(some_state_cols: _*)
...using Spark's symbolToColumn implicit method, or with the ColumnName constructor directly:
val some_array: Seq[ColumnName] = some_states.map(s => new ColumnName(s))