I parsed a string to a date:
val deathTime = "2019-03-14 05:22:45"
val dateFormat = new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
val deathDate = new java.sql.Date(dateFormat.parse(deathTime).getTime)
Now I want to subtract 30 days from deathDate. How do I do that? I have tried
deathDate.minusDays(30)
It does not work.
If your requirement is to do this with a DataFrame in Spark Scala:
df.select(date_add(current_date(), -30)).show()
+-----------------------------+
|date_add(current_date(), -30)|
+-----------------------------+
| 2019-03-02|
+-----------------------------+
The date_add function with a negative value, or date_sub with a positive value, will do what you want.
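For example, applied to a parsed date column rather than the current date (a minimal sketch; the deathDate column name is hypothetical):
import org.apache.spark.sql.functions._
import spark.implicits._
// Equivalent ways to go back 30 days from a date column:
df.select(date_sub($"deathDate", 30)).show()
df.select(date_add($"deathDate", -30)).show()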
If you are on Java 8 you can parse the date as a LocalDateTime. LocalDateTime allows operations on dates - https://docs.oracle.com/javase/8/docs/api/java/time/LocalDateTime.html#minusDays-long-
scala> import java.time.LocalDateTime
import java.time.LocalDateTime
scala> import java.time.format.DateTimeFormatter
import java.time.format.DateTimeFormatter
scala> val deathTime = "2019-03-14 05:22:45"
deathTime: String = 2019-03-14 05:22:45
scala> val deathDate = LocalDateTime.parse(deathTime, DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"))
deathDate: java.time.LocalDateTime = 2019-03-14T05:22:45
scala> deathDate.minusDays(30)
res1: java.time.LocalDateTime = 2019-02-12T05:22:45
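If you still need a java.sql.Date at the end, as in the question's original code, one sketch is to convert the shifted LocalDateTime back (Date.valueOf takes a LocalDate, so the time-of-day part is dropped):
import java.sql.Date
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
val shifted = LocalDateTime
  .parse("2019-03-14 05:22:45", DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"))
  .minusDays(30)
// Convert back to java.sql.Date, keeping only the date part
val deathDate: Date = Date.valueOf(shifted.toLocalDate)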
Also see Java: Easiest Way to Subtract Dates
I'm trying to cast a column to TimestampType, where the value is in the format "11/14/2022 4:48:24 PM". However, when I display the results I see the values as null.
Here is the sample code that I'm using to cast the timestamp field.
val messages = df.withColumn("Offset", $"Offset".cast(LongType))
.withColumn("Time(readable)", $"EnqueuedTimeUtc".cast(TimestampType))
.withColumn("Body", $"Body".cast(StringType))
.select("Offset", "Time(readable)", "Body")
display(messages)
Is there any other way I can try to avoid the null values?
Instead of casting to TimestampType, you can use the to_timestamp function and provide the time format explicitly, like so:
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import spark.implicits._
val time_df = Seq((62536, "11/14/2022 4:48:24 PM"), (62537, "12/14/2022 4:48:24 PM")).toDF("Offset", "Time")
val messages = time_df
.withColumn("Offset", $"Offset".cast(LongType))
.withColumn("Time(readable)", to_timestamp($"Time", "MM/dd/yyyy h:mm:ss a"))
.select("Offset", "Time(readable)")
messages.show(false)
+------+-------------------+
|Offset|Time(readable) |
+------+-------------------+
|62536 |2022-11-14 16:48:24|
|62537 |2022-12-14 16:48:24|
+------+-------------------+
messages: org.apache.spark.sql.DataFrame = [Offset: bigint, Time(readable): timestamp]
One thing to remember is that you will have to set one Spark configuration to allow the legacy time parser policy:
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")
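Since to_timestamp returns null for any value that doesn't match the given pattern, a quick way to find the offending rows is to filter on the parsed column (a sketch against the DataFrame built above):
// Rows whose timestamp failed to parse end up as null in Time(readable)
val unparsed = messages.filter(col("Time(readable)").isNull)
unparsed.show(false)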
I want a dataframe to be reordered in ascending order based on a datetime column which is in the format "23-07-2018 16:01".
My program sorts to the date level but not to HH:mm. I want the output to include the HH:mm details as well, sorted accordingly.
package com.spark
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{to_date, to_timestamp}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
object conversion {
  def main(args: Array[String]) = {
    val spark = SparkSession.builder().master("local").appName("conversion").enableHiveSupport().getOrCreate()
    import spark.implicits._
    val sourceDF = spark.read.format("csv").option("header","true").option("inferSchema","true").load("D:\\2018_Sheet1.csv")
    val modifiedDF = sourceDF.withColumn("CredetialEndDate",to_date($"CredetialEndDate","dd-MM-yyyy HH:mm"))
    //This converts into "dd-MM-yyyy" but "dd-MM-yyyy HH:mm" is expected
    //what is the equivalent Dataframe API to convert string to HH:mm ?
    modifiedDF.createOrReplaceGlobalTempView("conversion")
    val sortedDF = spark.sql("select * from global_temp.conversion order by CredetialEndDate ASC ").show(50)
    //dd-MM-YYYY 23-07-2018 16:01
  }
}
So my result should have the column in the format "23-07-2018 16:01" instead of just "23-07-2018" and having sorted ascending manner.
The method to_date converts the column into a DateType which has date only, no time. Try to use to_timestamp instead.
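In other words, a minimal change to the code above would be something like this (keeping the question's column name):
val modifiedDF = sourceDF.withColumn("CredetialEndDate",
  to_timestamp($"CredetialEndDate", "dd-MM-yyyy HH:mm"))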
Edit: If you want to do the sorting but keep the original string representation you can do something like:
val modifiedDF = sourceDF.withColumn("SortingColumn",to_timestamp($"CredetialEndDate","dd-MM-yyyy HH:mm"))
and then modify the result to:
val sortedDF = spark.sql("select * from global_temp.conversion order by SortingColumn ASC").drop("SortingColumn").show(50)
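The same can also be written with the DataFrame API, which avoids the temp view entirely (a sketch):
val sortedDF = modifiedDF
  .orderBy($"SortingColumn".asc)
  .drop("SortingColumn")
sortedDF.show(50)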
I am getting the error:
org.apache.spark.sql.AnalysisException: cannot resolve 'year'
My input data:
1,2012-07-21,2014-04-09
My code:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql._
import org.apache.spark.sql.functions._
case class c (id:Int,start:String,end:String)
val c1 = sc.textFile("date.txt")
val c2 = c1.map(_.split(",")).map(r=>(c(r(0).toInt,r(1).toString,r(2).toString)))
val c3 = c2.toDF();
c3.registerTempTable("c4")
val r = sqlContext.sql("select id,datediff(year,to_date(end), to_date(start)) AS date from c4")
What can I do to resolve the above error?
I have tried the following code, but I got the output in days and I need it in years:
val r = sqlContext.sql("select id,datediff(to_date(end), to_date(start)) AS date from c4")
Please advise me if I can use any function like to_date to get the year difference.
Another simple way is to cast the string to DateType in Spark SQL and apply the SQL date and time functions to the columns, like the following:
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
val c4 = c3.select(col("id"), col("start").cast(DateType), col("end").cast(DateType))
c4.withColumn("dateDifference", datediff(col("end"),col("start")))
.withColumn("monthDifference", months_between(col("end"),col("start")))
.withColumn("yearDifference", year(col("end"))-year(col("start")))
.show()
One of the above answers doesn't return the right year when the days between the two dates are less than 365. The example below provides the right year and rounds the month and year differences to 2 decimals.
Seq(("2019-07-01"),("2019-06-24"),("2019-08-24"),("2018-12-23"),("2018-07-20")).toDF("startDate").select(
col("startDate"),current_date().as("endDate"))
.withColumn("datesDiff", datediff(col("endDate"),col("startDate")))
.withColumn("montsDiff", months_between(col("endDate"),col("startDate")))
.withColumn("montsDiff_round", round(months_between(col("endDate"),col("startDate")),2))
.withColumn("yearsDiff", months_between(col("endDate"),col("startDate"),true).divide(12))
.withColumn("yearsDiff_round", round(months_between(col("endDate"),col("startDate"),true).divide(12),2))
.show()
Outputs:
+----------+----------+---------+-----------+---------------+--------------------+---------------+
| startDate| endDate|datesDiff| montsDiff|montsDiff_round| yearsDiff|yearsDiff_round|
+----------+----------+---------+-----------+---------------+--------------------+---------------+
|2019-07-01|2019-07-24| 23| 0.74193548| 0.74| 0.06182795666666666| 0.06|
|2019-06-24|2019-07-24| 30| 1.0| 1.0| 0.08333333333333333| 0.08|
|2019-08-24|2019-07-24| -31| -1.0| -1.0|-0.08333333333333333| -0.08|
|2018-12-23|2019-07-24| 213| 7.03225806| 7.03| 0.586021505| 0.59|
|2018-07-20|2019-07-24| 369|12.12903226| 12.13| 1.0107526883333333| 1.01|
+----------+----------+---------+-----------+---------------+--------------------+---------------+
You can find a complete working example at the URL below:
https://sparkbyexamples.com/spark-calculate-difference-between-two-dates-in-days-months-and-years/
Hope this helps.
Happy Learning !!
val r = sqlContext.sql("select id,datediff(year,to_date(end), to_date(start)) AS date from c4")
In the above code, "year" is not a column in the data frame, i.e. it is not a valid column in table "c4". That is why the AnalysisException is thrown: the query is invalid because it cannot find a "year" column.
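Spark SQL's datediff takes only two arguments (end date, start date) and returns days; there is no SQL-Server-style datediff(year, ...). A sketch of a query that derives years via months_between instead:
val r = sqlContext.sql("select id, months_between(to_date(end), to_date(start)) / 12 as years from c4")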
Using a Spark user-defined function (UDF) will be a more robust approach. Since datediff only returns the difference in days, I prefer to use my own UDF.
import java.sql.Timestamp
import java.time.Instant
import java.time.temporal.ChronoUnit
import org.apache.spark.sql.functions.{udf, col}
import org.apache.spark.sql.DataFrame
def timeDiff(chronoUnit: ChronoUnit)(dateA: Timestamp, dateB: Timestamp): Long = {
chronoUnit.between(
Instant.ofEpochMilli(dateA.getTime),
Instant.ofEpochMilli(dateB.getTime)
)
}
def withTimeDiff(dateA: String, dateB: String, colName: String, chronoUnit: ChronoUnit)(df: DataFrame): DataFrame = {
val timeDiffUDF = udf[Long, Timestamp, Timestamp](timeDiff(chronoUnit))
df.withColumn(colName, timeDiffUDF(col(dateA), col(dateB)))
}
Then I call it as a dataframe transformation.
df.transform(withTimeDiff("sleepTime", "wakeupTime", "minutes", ChronoUnit.MINUTES))
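If the same logic is also needed from Spark SQL, the UDF can be registered under a name (a sketch, assuming a SparkSession named spark; minutesBetween is a hypothetical name):
// Register a named UDF so it is callable from SQL queries as well
spark.udf.register("minutesBetween",
  (a: java.sql.Timestamp, b: java.sql.Timestamp) => timeDiff(ChronoUnit.MINUTES)(a, b))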
This is what my dataframe looks like at the moment:
+------------+
| DATE |
+------------+
| 19931001|
| 19930404|
| 19930603|
| 19930805|
+------------+
I am trying to reformat this string value to yyyy-mm-dd hh:mm:ss.fff and keep it as a string, not a date type or timestamp.
How would I do that using the withColumn method ?
Here is a solution using a UDF and withColumn. I have assumed that you have a string date field in the Dataframe:
import java.text.SimpleDateFormat
import org.apache.spark.sql.functions.udf
//Create dfList dataframe
val dfList = spark.sparkContext
  .parallelize(Seq("19931001", "19930404", "19930603", "19930805")).toDF("DATE")
//Define the UDF before applying it to the column
val dateToTimeStamp = udf((date: String) => {
  val stringDate = date.substring(0, 4) + "/" + date.substring(4, 6) + "/" + date.substring(6, 8)
  val format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
  format.format(new SimpleDateFormat("yyyy/MM/dd").parse(stringDate))
})
dfList.withColumn("DATE", dateToTimeStamp($"DATE")).show()
df.withColumn("date",
  from_unixtime(unix_timestamp($"date", "yyyyMMdd"), "yyyy-MM-dd HH:mm:ss.SSS"))
this should work.
Another thing to note is that mm gives minutes and MM gives months (and milliseconds are SSS, not fff); hope this helps you.
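To see the mm/MM difference concretely, here is a small sketch using plain SimpleDateFormat:
import java.text.SimpleDateFormat
val d = new SimpleDateFormat("yyyyMMdd").parse("19931001")
new SimpleDateFormat("yyyy-MM-dd").format(d) // "1993-10-01" - MM is the month
new SimpleDateFormat("yyyy-mm-dd").format(d) // "1993-00-01" - mm is the minutes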
First, I created this DF:
val df = sc.parallelize(Seq("19931001","19930404","19930603","19930805")).toDF("DATE")
For date management we are going to use the Joda-Time library (don't forget to add the joda-time.jar file to your classpath):
import org.joda.time.format.DateTimeFormat
import org.joda.time.format.DateTimeFormatter
def func(s: String): String = {
  val dateFormat = DateTimeFormat.forPattern("yyyyMMdd")
  val resultDate = dateFormat.parseDateTime(s)
  resultDate.toString()
}
Finally, apply the function to dataframe:
val temp = df.map(l => func(l.get(0).toString()))
val df2 = temp.toDF("DATE")
df2.show()
This answer may still need some work, as I am new to Spark myself, but it gets the job done, I think!
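A possible alternative that avoids the extra dependency is java.time, which ships with Java 8+ (a sketch against the same df as above; the formatter is built inside the lambda because DateTimeFormatter is not serializable):
import java.time.LocalDate
import java.time.format.DateTimeFormatter
import spark.implicits._
// Parse each 8-digit string and render it as an ISO date-time string
val df2 = df.map { l =>
  val inFmt = DateTimeFormatter.ofPattern("yyyyMMdd")
  LocalDate.parse(l.getString(0), inFmt).atStartOfDay().toString
}.toDF("DATE")
df2.show()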
I would like to generate unique date ranges between the current date and, say, 2050.
val start_date = "2017-03-21"
val end_date = "2050-03-21"
Not sure how I can create a function for it. Any input here would be appreciated. The difference between the start and end dates can be anything.
Unique date range means the function would never return a date range it has already returned.
I have this solution in mind:
val start_date = "2017-03-21"
val end_date = "2050-03-21"
// pseudocode: each call hands out the next consecutive one-day range
while (start_date < end_date) {
  val range_end = start_date + 1 // one day after start_date
  return (start_date, range_end)
}
start_date = start_date + 1 // advance so the next call gets a fresh range
We will use the java.time.LocalDate and temporal.ChronoUnit imports to achieve this:
scala> import java.time.LocalDate
import java.time.LocalDate
scala> import java.time.temporal.ChronoUnit
import java.time.temporal.ChronoUnit
scala> val startDate = LocalDate.parse("2017-03-21")
startDate: java.time.LocalDate = 2017-03-21
scala> val endDate = LocalDate.parse("2050-03-21")
endDate: java.time.LocalDate = 2050-03-21
scala> val dateAmount = 5
dateAmount: Int = 5
scala> val randomDates = List.fill(dateAmount) {
val randomAmt = ChronoUnit.DAYS.between(startDate, endDate) * math.random() // used to generate a random amount of days within given limits
startDate.plusDays(randomAmt.toInt) // returns a date from that random amount, will not go beyond endDate
}
randomDates: List[java.time.LocalDate] = List(2049-03-16, 2025-12-30, 2042-04-20, 2027-03-14, 2031-03-15)
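If the ranges must truly never repeat, one sketch is to walk forward from the start date, carving consecutive non-overlapping ranges of random length; since each range begins where the previous one ended, no range is produced twice:
import java.time.LocalDate
import scala.util.Random
// Consecutive, non-overlapping (start, end) pairs; maxLen caps each range's length in days
def uniqueRanges(start: LocalDate, end: LocalDate, maxLen: Int = 30): Iterator[(LocalDate, LocalDate)] =
  Iterator.iterate(start)(d => d.plusDays(Random.nextInt(maxLen) + 1))
    .takeWhile(_.isBefore(end))
    .sliding(2)
    .collect { case Seq(a, b) => (a, b) }
uniqueRanges(LocalDate.parse("2017-03-21"), LocalDate.parse("2050-03-21")).take(3).foreach(println)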