Get start date & end date from a range of timestamps - Scala

I have a dataframe in Spark (Scala) loaded from a large CSV file.
The dataframe looks like this:
| key | col1 | timestamp             |
|-----|------|-----------------------|
| 1   | aa   | 2019-01-01 08:02:05.1 |
| 1   | aa   | 2019-09-02 08:02:05.2 |
| 1   | cc   | 2019-12-24 08:02:05.3 |
| 2   | dd   | 2013-01-22 08:02:05.4 |
I need to add two columns, start_date & end_date, like this:
| key | col1 | timestamp             | start_date            | end_date              |
|-----|------|-----------------------|-----------------------|-----------------------|
| 1   | aa   | 2019-01-01 08:02:05.1 | 2019-01-01 08:02:05.1 | 2019-09-02 08:02:05.2 |
| 1   | aa   | 2019-09-02 08:02:05.2 | 2019-09-02 08:02:05.2 | 2019-12-24 08:02:05.3 |
| 1   | cc   | 2019-12-24 08:02:05.3 | 2019-12-24 08:02:05.3 | NULL                  |
| 2   | dd   | 2013-01-22 08:02:05.4 | 2013-01-22 08:02:05.4 | NULL                  |
Here, for each row, end_date is the next timestamp for the same key; for the latest timestamp of a key, end_date should be NULL.
What I tried so far:
I tried to use a window function to calculate a rank for each partition, something like this:
var df = read_csv()
// copy timestamp to start_date
df = df.withColumn("start_date", df.col("timestamp"))
// add a null value for end_date
df = df.withColumn("end_date", typedLit[Option[String]](None))

val windowSpec = Window.partitionBy("merge_key_column").orderBy("start_date")
df
  .withColumn("rank", dense_rank().over(windowSpec))
  .withColumn("max", max("rank").over(Window.partitionBy("merge_key_column")))
So far, I haven't got the desired output.

Use the lead window function for this case.
Example:
import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions._
import org.apache.spark.sql._

val df = Seq(
  (1, "aa", "2019-01-01 08:02:05.1"),
  (1, "aa", "2019-09-02 08:02:05.2"),
  (1, "cc", "2019-12-24 08:02:05.3"),
  (2, "dd", "2013-01-22 08:02:05.4")
).toDF("key", "col1", "timestamp")

val df1 = df.withColumn("start_date", col("timestamp"))
val windowSpec = Window.partitionBy("key").orderBy("start_date")

df1.withColumn("end_date", lead(col("start_date"), 1).over(windowSpec)).show(10, false)
//+---+----+---------------------+---------------------+---------------------+
//|key|col1|timestamp |start_date |end_date |
//+---+----+---------------------+---------------------+---------------------+
//|1 |aa |2019-01-01 08:02:05.1|2019-01-01 08:02:05.1|2019-09-02 08:02:05.2|
//|1 |aa |2019-09-02 08:02:05.2|2019-09-02 08:02:05.2|2019-12-24 08:02:05.3|
//|1 |cc |2019-12-24 08:02:05.3|2019-12-24 08:02:05.3|null |
//|2 |dd |2013-01-22 08:02:05.4|2013-01-22 08:02:05.4|null |
//+---+----+---------------------+---------------------+---------------------+
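If the ordering must be strictly chronological rather than lexicographic (for the ISO-style strings above the two happen to coincide), the same lead() idea can be applied after casting the column; a minimal sketch, assuming the df from the example above:
import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions._

// "ts" is a helper column introduced here only for ordering.
val withTs = df.withColumn("ts", to_timestamp(col("timestamp")))
val byKey = Window.partitionBy("key").orderBy("ts")

withTs
  .withColumn("start_date", col("timestamp"))
  .withColumn("end_date", lead(col("timestamp"), 1).over(byKey))
  .drop("ts")
  .show(false)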

Related

Subtract months from a date using Spark Scala

I am trying to subtract some months from a date. I have the following DF, called df1, where MonthSub is always positive, so I probably have to make it negative to subtract it from the date:
+-------------+----------+
| Date | MonthSub |
+-------------+----------+
| 30/11/2020 | 12 |
| 25/07/2020 | 5 |
| 11/01/2020 | 1 |
+-------------+----------+
And I expect to get the following:
+-------------+----------+-------------+
| Date | MonthSub | Result |
+-------------+----------+-------------+
| 30/11/2020 | 12 | 30/11/2019 |
| 25/07/2020 | 5 | 25/02/2020 |
| 11/01/2020 | 1 | 11/12/2019 |
+-------------+----------+-------------+
Schema of DF1:
root
|-- Date: string (nullable = true)
|-- MonthSub: string (nullable = true)
What I am doing:
df1 = df1.withColumn("MonthSub", col("MonthSub").cast(IntegerType))
val dfMonth = df1.withColumn("Result", add_months(to_date(col("Date"), "dd-MM-yyyy"), col("MonthSub")))
But I am constantly getting null values.
Are there other options to do this, or what am I doing wrong?
You can use add_months with a negative month value, as below. Note also that your Date strings use the pattern dd/MM/yyyy, but you parsed them with dd-MM-yyyy, which is likely why to_date was returning null.
val dfMonth = df1.withColumn("Result",
  add_months(to_date(col("Date"), "dd/MM/yyyy"), col("MonthSub") * lit(-1))
)
dfMonth.show(false)
Output:
+----------+--------+----------+
|Date |MonthSub|Result |
+----------+--------+----------+
|30/11/2020|12 |2019-11-30|
|25/07/2020|5 |2020-02-25|
|11/01/2020|1 |2019-12-11|
+----------+--------+----------+
You can change the date format as you like.
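For example, if the Result should come back in the same dd/MM/yyyy layout as the input (add_months returns a date rendered as yyyy-MM-dd), a minimal sketch using date_format, under the same assumptions as above:
import org.apache.spark.sql.functions.{add_months, col, date_format, lit, to_date}

// Same computation as above, then render the result as dd/MM/yyyy strings.
val dfMonthFmt = df1.withColumn(
  "Result",
  date_format(add_months(to_date(col("Date"), "dd/MM/yyyy"), col("MonthSub") * lit(-1)), "dd/MM/yyyy")
)
dfMonthFmt.show(false)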

How to find the date on which a consecutive "complete" status started within a 7-day period

From the input below, I need to find the date on which a run of consecutive 'complete' statuses started, looking back 7 days from a given date.
Requirement:
1. Go back 8 days (this is easy).
2. So we are on 20190111 in the data frame below; I need to check day by day from 20190111 back to 20190104 (a 7-day period) and get the date from which the status has been 'complete' for 7 consecutive days. Here we should get 20190108.
I need this in Spark/Scala.
input
+---+--------+--------+
| id| date| status|
+---+--------+--------+
| 1|20190101|complete|
| 2|20190102|complete|
| 3|20190103|complete|
| 4|20190104|complete|
| 5|20190105|complete|
| 6|20190106|complete|
| 7|20190107|complete|
| 8|20190108|complete|
| 9|20190109| pending|
| 10|20190110|complete|
| 11|20190111|complete|
| 12|20190112| pending|
| 13|20190113|complete|
| 14|20190114|complete|
| 15|20190115| pending|
| 16|20190116| pending|
| 17|20190117| pending|
| 18|20190118| pending|
| 19|20190119| pending|
+---+--------+--------+
output
+---+--------+--------+
| id| date| status|
+---+--------+--------+
| 1|20190101|complete|
| 2|20190102|complete|
| 3|20190103|complete|
| 4|20190104|complete|
| 5|20190105|complete|
| 6|20190106|complete|
| 7|20190107|complete|
| 8|20190108|complete|
+---+--------+--------+
For Spark >= 2.4:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val df = Seq((1,"20190101","complete"),(2,"20190102","complete"),(3,"20190103","complete"),
  (4,"20190104","complete"),(5,"20190105","complete"),(6,"20190106","complete"),
  (7,"20190107","complete"),(8,"20190108","complete"),(9,"20190109","pending"),
  (10,"20190110","complete"),(11,"20190111","complete"),(12,"20190112","pending"),
  (13,"20190113","complete"),(14,"20190114","complete"),(15,"20190115","pending"),
  (16,"20190116","pending"),(17,"20190117","pending"),(18,"20190118","pending"),
  (19,"20190119","pending")).toDF("id","date","status")
val df1 = df.select($"id", to_date($"date", "yyyyMMdd").as("date"), $"status")
val win = Window.orderBy("id")
// coalesce lag_status and status to remove the null on the first row
val df2 = df1.select($"*", lag($"status", 1).over(win).as("lag_status"))
  .withColumn("lag_stat", coalesce($"lag_status", $"status")).drop("lag_status")
// integer flag: 1 if the status of the current day equals that of the previous day
val df3 = df2.select($"*", ($"status" === $"lag_stat").cast("integer").as("status_flag"))
val win1 = Window.orderBy($"id".desc).rangeBetween(0, 7)
val df4 = df3.select($"*", sum($"status_flag").over(win1).as("previous_7_sum"))
val df_new = df4.where($"previous_7_sum" === 8).select($"date")
  .select(explode(sequence(date_sub($"date", 7), $"date")).as("date"))
val df5 = df4.join(df_new, Seq("date"), "inner")
  .select($"id", concat_ws("", split($"date".cast("string"), "-")).as("date"), $"status")
+---+--------+--------+
| id| date| status|
+---+--------+--------+
| 1|20190101|complete|
| 2|20190102|complete|
| 3|20190103|complete|
| 4|20190104|complete|
| 5|20190105|complete|
| 6|20190106|complete|
| 7|20190107|complete|
| 8|20190108|complete|
+---+--------+--------+
For Spark < 2.4, use a udf instead of the built-in array function sequence:
val df1 = df.select($"id", $"date".cast("integer").as("date"), $"status")
val win = Window.orderBy("id")
// coalesce lag_status and status to remove the null on the first row
val df2 = df1.select($"*", lag($"status", 1).over(win).as("lag_status"))
  .withColumn("lag_stat", coalesce($"lag_status", $"status")).drop("lag_status")
// integer flag: 1 if the status of the current day equals that of the previous day
val df3 = df2.select($"*", ($"status" === $"lag_stat").cast("integer").as("status_flag"))
val win1 = Window.orderBy($"id".desc).rangeBetween(0, 7)
val df4 = df3.select($"*", sum($"status_flag").over(win1).as("previous_7_sum"))
val ud1 = udf((col1: Int) => ((col1 - 7).to(col1)).toArray)
val df_new = df4.where($"previous_7_sum" === 8)
  .withColumn("dt_arr", ud1($"date"))
  .select(explode($"dt_arr").as("date"))
val df5 = df4.join(df_new, Seq("date"), "inner")
  .select($"id", concat_ws("", split($"date".cast("string"), "-")).as("date"), $"status")
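One caveat with the integer-based udf (an observation, not part of the original answer): it treats yyyyMMdd values as plain integers, so it only works while the 8-day span stays within a single month. A minimal sketch of a safer variant, assuming the date column is kept as a yyyyMMdd string instead of being cast to integer:
import org.apache.spark.sql.functions.udf

// Builds the 8 calendar dates (as yyyyMMdd strings) ending at the given date.
// datesBackUdf is a hypothetical name; it would take the place of ud1 above.
val datesBackUdf = udf((d: String) => {
  val fmt = java.time.format.DateTimeFormatter.ofPattern("yyyyMMdd")
  val end = java.time.LocalDate.parse(d, fmt)
  (0 to 7).map(i => end.minusDays(7L - i).format(fmt))
})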

How to correctly join 2 dataframes in Apache Spark?

I am new to Apache Spark and need some help. Can someone tell me how to correctly join the following 2 dataframes?
First dataframe:
| DATE_TIME | PHONE_NUMBER |
|---------------------|--------------|
| 2019-01-01 00:00:00 | 7056589658 |
| 2019-02-02 00:00:00 | 7778965896 |
Second dataframe:
| DATE_TIME | IP |
|---------------------|---------------|
| 2019-01-01 01:00:00 | 194.67.45.126 |
| 2019-02-02 00:00:00 | 102.85.62.100 |
| 2019-03-03 03:00:00 | 102.85.62.100 |
Final dataframe which I want:
| DATE_TIME | PHONE_NUMBER | IP |
|---------------------|--------------|---------------|
| 2019-01-01 00:00:00 | 7056589658 | |
| 2019-01-01 01:00:00 | | 194.67.45.126 |
| 2019-02-02 00:00:00 | 7778965896 | 102.85.62.100 |
| 2019-03-03 03:00:00 | | 102.85.62.100 |
Here is the code I tried:
import org.apache.spark.sql.Dataset
import spark.implicits._
val df1 = Seq(
  ("2019-01-01 00:00:00", "7056589658"),
  ("2019-02-02 00:00:00", "7778965896")
).toDF("DATE_TIME", "PHONE_NUMBER")
df1.show()

val df2 = Seq(
  ("2019-01-01 01:00:00", "194.67.45.126"),
  ("2019-02-02 00:00:00", "102.85.62.100"),
  ("2019-03-03 03:00:00", "102.85.62.100")
).toDF("DATE_TIME", "IP")
df2.show()
val total = df1.join(df2, Seq("DATE_TIME"), "left_outer")
total.show()
Unfortunately, it raises an error:
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:136)
at org.apache.spark.sql.execution.InputAdapter.doExecuteBroadcast(WholeStageCodegenExec.scala:367)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:144)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:140)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:140)
at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.prepareBroadcast(BroadcastHashJoinExec.scala:135)
...
You need a full outer join; your code itself is fine. Your issue might be something else, but from the stack trace you posted it is not possible to tell what it is.
val total = df1.join(df2, Seq("DATE_TIME"), "full_outer")
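Adding an orderBy makes the result easier to compare with the desired table; a minimal sketch of the expected output with the df1/df2 from the question (formatting approximate):
val total = df1.join(df2, Seq("DATE_TIME"), "full_outer").orderBy("DATE_TIME")
total.show(false)
//+-------------------+------------+-------------+
//|DATE_TIME          |PHONE_NUMBER|IP           |
//+-------------------+------------+-------------+
//|2019-01-01 00:00:00|7056589658  |null         |
//|2019-01-01 01:00:00|null        |194.67.45.126|
//|2019-02-02 00:00:00|7778965896  |102.85.62.100|
//|2019-03-03 03:00:00|null        |102.85.62.100|
//+-------------------+------------+-------------+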
You can also write the join condition explicitly; note that a left outer join keeps only the dates present in df1:
val total = df1.join(df2, df1("DATE_TIME") === df2("DATE_TIME"), "left_outer")

In Spark Scala, how to compare adjacent rows in a dataframe

How can I compare the dates of adjacent rows (preceding and next) in a dataframe? This should happen at a key level.
I have the following data after sorting on key and dates:
source_Df.show()
+-----+--------+------------+------------+
| key | code | begin_dt | end_dt |
+-----+--------+------------+------------+
| 10 | ABC | 2018-01-01 | 2018-01-08 |
| 10 | BAC | 2018-01-03 | 2018-01-15 |
| 10 | CAS | 2018-01-03 | 2018-01-21 |
| 20 | AAA | 2017-11-12 | 2018-01-03 |
| 20 | DAS | 2018-01-01 | 2018-01-12 |
| 20 | EDS | 2018-02-01 | 2018-02-16 |
+-----+--------+------------+------------+
When the dates of these rows overlap (i.e. the current row's begin_dt falls between the begin and end dates of the previous row), I need all such rows to carry the lowest begin date and the highest end date.
Here is the output I need:
final_Df.show()
+-----+--------+------------+------------+
| key | code | begin_dt | end_dt |
+-----+--------+------------+------------+
| 10 | ABC | 2018-01-01 | 2018-01-21 |
| 10 | BAC | 2018-01-01 | 2018-01-21 |
| 10 | CAS | 2018-01-01 | 2018-01-21 |
| 20 | AAA | 2017-11-12 | 2018-01-12 |
| 20 | DAS | 2017-11-12 | 2018-01-12 |
| 20 | EDS | 2018-02-01 | 2018-02-16 |
+-----+--------+------------+------------+
Appreciate any ideas to achieve this. Thanks in advance!
Here's one approach:
1. Create a new column group_id that is null if begin_dt is within the date range of the previous row, and a unique integer otherwise.
2. Backfill the nulls in group_id with the last non-null value.
3. Compute min(begin_dt) and max(end_dt) within each (key, group_id) partition.
Example below:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
val df = Seq(
  (10, "ABC", "2018-01-01", "2018-01-08"),
  (10, "BAC", "2018-01-03", "2018-01-15"),
  (10, "CAS", "2018-01-03", "2018-01-21"),
  (20, "AAA", "2017-11-12", "2018-01-03"),
  (20, "DAS", "2018-01-01", "2018-01-12"),
  (20, "EDS", "2018-02-01", "2018-02-16")
).toDF("key", "code", "begin_dt", "end_dt")
val win1 = Window.partitionBy($"key").orderBy($"begin_dt", $"end_dt")
val win2 = Window.partitionBy($"key", $"group_id")
df.
withColumn("group_id", when(
$"begin_dt".between(lag($"begin_dt", 1).over(win1), lag($"end_dt", 1).over(win1)), null
).otherwise(monotonically_increasing_id)
).
withColumn("group_id", last($"group_id", ignoreNulls=true).
over(win1.rowsBetween(Window.unboundedPreceding, 0))
).
withColumn("begin_dt2", min($"begin_dt").over(win2)).
withColumn("end_dt2", max($"end_dt").over(win2)).
orderBy("key", "begin_dt", "end_dt").
show
// +---+----+----------+----------+-------------+----------+----------+
// |key|code| begin_dt| end_dt| group_id| begin_dt2| end_dt2|
// +---+----+----------+----------+-------------+----------+----------+
// | 10| ABC|2018-01-01|2018-01-08|1047972020224|2018-01-01|2018-01-21|
// | 10| BAC|2018-01-03|2018-01-15|1047972020224|2018-01-01|2018-01-21|
// | 10| CAS|2018-01-03|2018-01-21|1047972020224|2018-01-01|2018-01-21|
// | 20| AAA|2017-11-12|2018-01-03| 455266533376|2017-11-12|2018-01-12|
// | 20| DAS|2018-01-01|2018-01-12| 455266533376|2017-11-12|2018-01-12|
// | 20| EDS|2018-02-01|2018-02-16| 455266533377|2018-02-01|2018-02-16|
// +---+----+----------+----------+-------------+----------+----------+
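If you want exactly the shape of final_Df from the question, a minimal sketch (an assumed final projection, not part of the original answer) reuses df, win1 and win2 from above, overwrites begin_dt/end_dt with the per-group min/max, and drops the helper column:
// begin_dt/end_dt are replaced by the group-wide min/max before group_id is dropped.
val merged = df.
  withColumn("group_id", when(
    $"begin_dt".between(lag($"begin_dt", 1).over(win1), lag($"end_dt", 1).over(win1)), null
  ).otherwise(monotonically_increasing_id)).
  withColumn("group_id", last($"group_id", ignoreNulls=true).
    over(win1.rowsBetween(Window.unboundedPreceding, 0))).
  withColumn("begin_dt", min($"begin_dt").over(win2)).
  withColumn("end_dt", max($"end_dt").over(win2)).
  drop("group_id").
  orderBy("key", "begin_dt", "end_dt")

merged.show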

Scala - Spark - How can I get a new dataframe with the distinct values of a dataframe column and the first date for each of those distinct values?

I have a Spark Dataframe with the following schema:
+----+-----+------------+
| id | no  | date       |
+----+-----+------------+
| 1  | 123 | 2018/10/01 |
| 2  | 124 | 2018/10/01 |
| 3  | 123 | 2018/09/28 |
| 4  | 123 | 2018/09/27 |
+----+-----+------------+
...
What I want is to have a new DataFrame with the following data:
+-----+------------+
| no  | date       |
+-----+------------+
| 123 | 2018/09/27 |
| 124 | 2018/10/01 |
+-----+------------+
Can someone help me on this?:) Thank you!!
You can resolve it by using rank (https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html) on the dataframe with Spark SQL:
Register the dataframe as a temp table (e.g. with registerTempTable), say as df_temp_table, then run this query:
select dftt.*,
       dense_rank() OVER (PARTITION BY dftt.no ORDER BY dftt.date ASC) AS rank
from df_temp_table as dftt
you will get this dataframe:
|id | no  | date       | rank
|1  | 123 | 2018/10/01 | 3
|2  | 124 | 2018/10/01 | 1
|3  | 123 | 2018/09/28 | 2
|4  | 123 | 2018/09/27 | 1
On this df you can now filter the rank column for the value 1 to keep only the earliest date per no.
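Putting the two steps together, a minimal sketch (assuming the dataframe is named df and a SparkSession named spark, as in a spark-shell session; createOrReplaceTempView is the non-deprecated equivalent of registerTempTable):
df.createOrReplaceTempView("df_temp_table")
val firstDates = spark.sql("""
  SELECT no, date FROM (
    SELECT dftt.*,
           dense_rank() OVER (PARTITION BY dftt.no ORDER BY dftt.date ASC) AS rnk
    FROM df_temp_table dftt
  ) t
  WHERE t.rnk = 1
""")
firstDates.show()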
Welcome! You can try the code below:
import org.apache.spark.sql.functions.row_number
import org.apache.spark.sql.expressions.Window
val w = Window.partitionBy($"no").orderBy($"date".asc)
val Resultdf = df.withColumn("rownum", row_number.over(w))
.where($"rownum" === 1).drop("rownum","id")
Resultdf.show()
Output:
+---+----------+
| no| date|
+---+----------+
|124|2018/10/01|
|123|2018/09/27|
+---+----------+
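Since only the earliest date per no is needed and the dates are stored as yyyy/MM/dd strings (so their lexicographic minimum is also the chronological minimum), a plain aggregation would be an equivalent, window-free sketch:
import org.apache.spark.sql.functions.min

// Group by "no" and keep the smallest date string per group.
val Resultdf2 = df.groupBy("no").agg(min("date").as("date"))
Resultdf2.show()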