Spark dataframe aggregation of a column based on a condition in Scala

I have CSV data in the following format.
I need to find the top 2 vendors whose turnover is greater than 100 in the year 2017.
Turnover = Sum(Invoices whose status is Paid-in-Full) - Sum(Invoices whose status is Exception or Rejected)
I have loaded the data from the CSV in a Databricks Scala notebook as follows:
val invoices_data = spark.read.format(file_type)
.option("header", "true")
.option("dateFormat", "M/d/yy")
.option("inferSchema", "true")
.load("invoice.csv")
Then I tried to group by vendor name:
val avg_invoice_by_vendor = invoices_data.groupBy("VendorName")
But now I don't know how to proceed further.
Here is sample csv data.
Id InvoiceDate Status Invoice VendorName
2 2/23/17 Exception 23 V1
3 11/23/17 Paid-in-Full 56 V1
1 12/20/17 Paid-in-Full 12 V1
5 8/4/19 Paid-in-Full 123 V2
6 2/6/17 Paid-in-Full 237 V2
9 3/9/17 Rejected 234 V2
7 4/23/17 Paid-in-Full 78 V3
8 5/23/17 Exception 345 V4

You can use a UDF to sign the invoice amount depending on its status, and after grouping aggregate the DataFrame using the sum function:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.DateType
import spark.implicits._ // needed for the 'Status / 'Invoice symbol syntax below
def signInvoice: (String, Int) => Int = (status: String, invoice: Int) => {
status match {
case "Exception" | "Rejected" => -invoice
case "Paid-in-Full" => invoice
case _ => throw new IllegalStateException("wrong status")
}
}
val signInvoiceUdf = spark.udf.register("signInvoice", signInvoice)
val top2_vendorsDF = invoices_data
.withColumn("InvoiceDate", col("InvoiceDate").cast(DateType))
.filter(year(col("InvoiceDate")) === lit(2017))
.withColumn("Invoice", col("Invoice").as[Int])
.groupBy("VendorName")
.agg(sum(signInvoiceUdf('Status, 'Invoice)).as("sum_invoice"))
.filter(col("sum_invoice") > 100)
.orderBy(col("sum_invoice").desc)
.take(2)
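If you prefer to avoid a UDF, the same turnover logic can be expressed with the built-in when function; a minimal sketch, assuming the column names from the sample CSV and that InvoiceDate is still a string in M/d/yy format:

import org.apache.spark.sql.functions._

// Signed amount: positive for Paid-in-Full, negative for Exception/Rejected,
// null (ignored by sum) for any other status
val signedInvoice = when(col("Status") === "Paid-in-Full", col("Invoice"))
  .when(col("Status").isin("Exception", "Rejected"), -col("Invoice"))

val top2Vendors = invoices_data
  .withColumn("InvoiceDate", to_date(col("InvoiceDate"), "M/d/yy"))
  .filter(year(col("InvoiceDate")) === 2017)
  .groupBy("VendorName")
  .agg(sum(signedInvoice).as("turnover"))
  .filter(col("turnover") > 100)
  .orderBy(col("turnover").desc)
  .limit(2)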

I used the pivot method to solve the above issue.
invoices_data
.filter(invoices_data("InvoiceStatusDesc") === "Paid-in-Full" ||
invoices_data("InvoiceStatusDesc") === "Exception" ||
invoices_data("InvoiceStatusDesc") === "Rejected")
.filter(year(to_date(invoices_data("InvoiceDate"), "M/d/yy")) === 2017)
.groupBy("InvoiceVendorName").pivot("InvoiceStatusDesc").sum("InvoiceTotal")

Related

Filter a Scala dataset on an Option[Timestamp] column to return dates that are within n days of the current date

Let's say I have the following dataset called customers:
lastVisit                id
2018-08-08 12:23:43.234  11
2021-12-08 14:13:45.4    12
The lastVisit field is of type Option[Timestamp].
I want to be able to perform the following...
val filteredCustomers = customers.filter($"lastVisit" > current date - x days)
so that I return all the customers that have a lastVisit date within the last x days.
This is what I have tried so far.
val timeFilter: Timestamp => Long = input => {
val sdf = new SimpleDateFormat("yyyy-mm-dd")
val visitDate = sdf.parse(input.toString).toInstant.atZone(ZoneId.systemDefault()).toLocalDate
val dateNow = LocalDate.now()
ChronoUnit.DAYS.between(visitDate, dateNow)
}
val timeFilterUDF = udf(timeFilter)
val filteredCustomers = customers.withColumn("days", timeFilterUDF(col("lastVisit")))
val filteredCustomers2 = filteredCustomers.filter($"days" < n)
This runs locally, but when I submit it as a Spark job to run on the full table I get a NullPointerException in the following line:
val visitDate = sdf.parse(input.toString).toInstant.atZone(ZoneId.systemDefault()).toLocalDate
val dateNow = LocalDate.now()
The data looks good, so I am unsure what the problem could be. I also imagine there is a much better way to implement the logic I am trying to do, so any advice would be greatly appreciated!
Thank you
@Xaleate, based on your query, it seems like you want to achieve the logic
current_date - lastVisit < x days
Did you try the datediff function already available in Spark? Here is a two-line solution using datediff:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
object LastDateIssue {
val spark: SparkSession = SparkSession.builder().appName("Last Date issue").master("local[*]").getOrCreate()
def main(args: Array[String]): Unit = {
import spark.implicits._
//prepare customer data for test
var customers = Map(
"2018-08-08 12:23:43.234"-> 11,
"2021-12-08 14:13:45.4"-> 12,
"2022-02-01 14:13:45.4"-> 13)
.toSeq
.toDF("lastVisit", "id")
// number of days
val x: Int = 10
customers = customers.filter(datediff(lit(current_date()), col("lastVisit")) < x)
customers.show(20, truncate = false)
}
}
This returns id = 13 as that is within the last 10 days (you could choose x accordingly)
+---------------------+---+
|lastVisit |id |
+---------------------+---+
|2022-02-01 14:13:45.4|13 |
+---------------------+---+
Use the date_sub function:
df.filter($"lastVisit" > date_sub(current_date(),n)).show(false)

How to collect and process column-wise data in Spark

I have a dataframe that contains 7 days of hourly data (24 hours per day), so it has 168 hour columns.
id d1h1 d1h2 d1h3 ..... d7h24
aaa 21 24 8 ..... 14
bbb 16 12 2 ..... 4
ccc 21 2 7 ..... 6
What I want to do is find the max 3 values for each day:
id d1 d2 d3 .... d7
aaa [22,2,2] [17,2,2] [21,8,3] [32,11,2]
bbb [32,22,12] [47,22,2] [31,14,3] [32,11,2]
ccc [12,7,4] [28,14,7] [11,2,1] [19,14,7]
import org.apache.spark.sql.functions._
var df = ...
val first3 = udf((list : Seq[Double]) => list.slice(0,3))
for (i <- 1 to 7) {
val columns = (1 to 24).map(x => "d" + i + "h" + x)
df = df
.withColumn("d"+i, first3(sort_array(array(columns.head, columns.tail :_*), false)))
.drop(columns :_*)
}
This should give you what you want: for each day I aggregate the 24 hours into an array column, sort it in descending order, and select the first 3 elements.
Define pattern:
val p = "^(d[1-7])h[0-9]{1,2}$".r
Group columns:
import org.apache.spark.sql.functions._
val cols = df.columns.tail
.groupBy { case p(d) => d }
.map { case (c, cs) => {
val sorted = sort_array(array(cs map col: _*), false)
array(sorted(0), sorted(1), sorted(2)).as(c)
}}
And select:
df.select($"id" +: cols.toSeq: _*)

Adding columns in Spark dataframe based on rules

I have a dataframe df, which contains the data below:
customers  product  Val_id
1          A        1
2          B        X
3          C
4          D        Z
I have been provided 2 rules, which are as below:
rule_id  rule_name  product value  priority
123      ABC        A,B            1
456      DEF        A,B,D          2
The requirement is to apply these rules on dataframe df in priority order; customers who have passed rule 1 should not be considered for rule 2. In the final dataframe, two more columns, rule_id and rule_name, should be added. I have written the code below to achieve it:
val rule_name = when(col("product").isin("A","B"), "ABC").otherwise(when(col("product").isin("A","B","D"), "DEF").otherwise(""))
val rule_id = when(col("product").isin("A","B"), "123").otherwise(when(col("product").isin("A","B","D"), "456").otherwise(""))
val df1 = df_customers.withColumn("rule_name" , rule_name).withColumn("rule_id" , rule_id)
df1.show()
The final output looks like below:
customers  product  Val_id  rule_name  rule_id
1          A        1       ABC        123
2          B        X       ABC        123
3          C
4          D        Z       DEF        456
Is there any better way to achieve it, adding both columns by going through the entire dataset just once instead of twice?
Question: Is there any better way to achieve it, adding both columns by going through the entire dataset just once instead of twice?
Answer: Yes, you can have a Map return type in Scala.
Limitation: if you use this UDF with withColumn, you get a single column (for example ruleIDandRuleName) whose Map value (or any other data type Spark SQL accepts for a column) holds both the rule name and the rule id, rather than two separate columns, as shown in the example snippet below.
def ruleNameAndruleId = udf((product: String) => {
if (Seq("A", "B").contains(product)) Map("ruleName" -> "ABC", "ruleId" -> "123")
else if (Seq("A", "B", "D").contains(product)) Map("ruleName" -> "DEF", "ruleId" -> "456")
else Map("ruleName" -> "", "ruleId" -> "")
})
The caller will be:
df.withColumn("ruleIDandRuleName", ruleNameAndruleId(col("product"))) // ruleNameAndruleId returns a map containing the rule name and rule id
An alternative to your solution would be to use udf functions. It's almost similar to the when function, as both require serialization and deserialization. It's up to you to test which one is faster and more efficient.
def rule_name = udf((product : String) => {
if(Seq("A", "B").contains(product)) "ABC"
else if(Seq("A", "B", "D").contains(product)) "DEF"
else ""
})
def rule_id = udf((product : String) => {
if(Seq("A", "B").contains(product)) "123"
else if(Seq("A", "B", "D").contains(product)) "456"
else ""
})
val df1 = df_customers.withColumn("rule_name" , rule_name(col("product"))).withColumn("rule_id" , rule_id(col("product")))
df1.show()

Spark Scala: using an external variable "dataframe" in a map

I have two dataframes,
val df1 = sqlContext.csvFile("/data/testData.csv")
val df2 = sqlContext.csvFile("/data/someValues.csv")
df1=
startTime name cause1 cause2
15679 CCY 5 7
15683 2 5
15685 1 9
15690 9 6
df2=
cause description causeType
3 Xxxxx cause1
1 xxxxx cause1
3 xxxxx cause2
4 xxxxx
2 Xxxxx
and I want to apply a complex function getTimeCust to both cause1 and cause2 to determine a final cause, then match the description of this final cause code in df2. I must have a new df (or rdd) with the following columns:
startTime name cause descriptionCause
My solution was
val rdd2 = df1.map(row => {
val (cause, descriptionCause) = getTimeCust(row.getInt(2), row.getInt(3), df2)
Row (row(0),row(1),cause,descriptionCause)
})
If I run the code above I get a NullPointerException because df2 is not visible inside the map.
The function getTimeCust(Int, Int, DataFrame) works well outside the map.
Use df1.join(df2, <join condition>) to join your dataframes together, then select the fields you need from the joined dataframe.
You can't use Spark's distributed structures (RDD, DataFrame, etc.) in code that runs on an executor (like inside a map).
Try something like this:
import org.apache.spark.sql.functions.udf

def f1(cause1: Int, cause2: Int): Int = ??? // some logic to calculate the cause

val dfCause = df1.withColumn("df1_cause", udf(f1 _)($"cause1", $"cause2"))
val dfJoined = dfCause.join(df2, dfCause("df1_cause") === df2("cause"))
dfJoined.select("cause", "description").show()
Thank you @Assaf. Thanks to your answer and the Spark UDF with the dataframe, I have resolved this problem. The solution is:
val getTimeCust = udf((cause1: Int, cause2: Int) => {
var lastCause = 0
var categoryCause=""
var descCause=""
lastCause= .............
categoryCause= ........
(lastCause, categoryCause)
})
and after that call the udf as:
val dfWithCause = df1.withColumn("df1_cause", getTimeCust( $"cause1", $"cause2"))
And finally the join:
val dfFinale = dfWithCause.join(df2, dfWithCause.col("df1_cause._1") === df2.col("cause") and dfWithCause.col("df1_cause._2") === df2.col("causeType"), "outer")
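From dfFinale the columns asked for in the question can then be selected; a sketch, assuming the df1 column names shown above:

import org.apache.spark.sql.functions.col

val result = dfFinale.select(
  col("startTime"),
  col("name"),
  col("df1_cause._1").as("cause"),
  col("description").as("descriptionCause")
)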

How to zip two (or more) DataFrame in Spark

I have two DataFrame a and b.
a is like
Column 1 | Column 2
abc | 123
cde | 23
b is like
Column 1
1
2
I want to zip a and b (or even more) DataFrames which becomes something like:
Column 1 | Column 2 | Column 3
abc | 123 | 1
cde | 23 | 2
How can I do it?
An operation like this is not supported by the DataFrame API. It is possible to zip two RDDs, but to make it work you have to match both the number of partitions and the number of elements per partition. Assuming this is the case:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructField, StructType, LongType}
val a: DataFrame = sc.parallelize(Seq(
("abc", 123), ("cde", 23))).toDF("column_1", "column_2")
val b: DataFrame = sc.parallelize(Seq(Tuple1(1), Tuple1(2))).toDF("column_3")
// Merge rows
val rows = a.rdd.zip(b.rdd).map{
case (rowLeft, rowRight) => Row.fromSeq(rowLeft.toSeq ++ rowRight.toSeq)}
// Merge schemas
val schema = StructType(a.schema.fields ++ b.schema.fields)
// Create new data frame
val ab: DataFrame = sqlContext.createDataFrame(rows, schema)
If the above conditions are not met, the only option that comes to mind is adding an index and joining:
def addIndex(df: DataFrame) = sqlContext.createDataFrame(
// Add index
df.rdd.zipWithIndex.map{case (r, i) => Row.fromSeq(r.toSeq :+ i)},
// Create schema
StructType(df.schema.fields :+ StructField("_index", LongType, false))
)
// Add indices
val aWithIndex = addIndex(a)
val bWithIndex = addIndex(b)
// Join and clean
val ab = aWithIndex
.join(bWithIndex, Seq("_index"))
.drop("_index")
In Scala's implementation of DataFrames, there is no simple way to concatenate two dataframes into one. We can simply work around this limitation by adding indices to each row of the dataframes. Then, we can do an inner join on these indices. This is my stub code of this implementation:
val a: DataFrame = sc.parallelize(Seq(("abc", 123), ("cde", 23))).toDF("column_1", "column_2")
val aWithId: DataFrame = a.withColumn("id",monotonicallyIncreasingId)
val b: DataFrame = sc.parallelize(Seq((1), (2))).toDF("column_3")
val bWithId: DataFrame = b.withColumn("id",monotonicallyIncreasingId)
aWithId.join(bWithId, "id")
A little light reading - Check out how Python does this!
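Since monotonically increasing ids depend on partitioning and are not guaranteed to line up between two DataFrames, a common variant (a sketch, not part of the answer above) first converts them into consecutive row numbers with a window and joins on those:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{monotonically_increasing_id, row_number}

// Note: an un-partitioned window pulls all rows into a single partition,
// so this is only reasonable for modest data sizes.
def withRowNum(df: DataFrame): DataFrame = {
  val w = Window.orderBy(monotonically_increasing_id())
  df.withColumn("row_id", row_number().over(w))
}

val zipped = withRowNum(a).join(withRowNum(b), "row_id").drop("row_id")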
What about pure SQL?
SELECT
room_name,
sender_nickname,
message_id,
row_number() over (partition by room_name order by message_id) as message_index,
row_number() over (partition by room_name, sender_nickname order by message_id) as user_message_index
from messages
order by room_name, message_id
I know the OP was using Scala, but if, like me, you need to know how to do this in PySpark, then try the Python code below. Like @zero323's first solution it relies on RDD.zip() and will therefore fail if both DataFrames don't have the same number of partitions and the same number of rows in each partition.
from pyspark.sql import Row
from pyspark.sql.types import StructType
def zipDataFrames(left, right):
    CombinedRow = Row(*left.columns + right.columns)

    def flattenRow(row):
        left = row[0]
        right = row[1]
        combinedVals = [left[col] for col in left.__fields__] + [right[col] for col in right.__fields__]
        return CombinedRow(*combinedVals)

    zippedRdd = left.rdd.zip(right.rdd).map(lambda row: flattenRow(row))
    combinedSchema = StructType(left.schema.fields + right.schema.fields)
    return zippedRdd.toDF(combinedSchema)
joined = zipDataFrames(a, b)