I have data in a Hive table in the below format.
2019-11-21 18:19:15.817
I wrote a SQL query as below to convert the above column value into epoch format.
val newDF = spark.sql(f"""select TRIM(id) as ID, unix_timestamp(sig_ts) as SIG_TS from table""")
And I am getting the output column SIG_TS as 1574360296, which does not include the milliseconds.
How to get the epoch timestamp of a date with milliseconds?
Simple way: create a UDF, since Spark's built-in unix_timestamp truncates at seconds.
import java.sql.Timestamp
import org.apache.spark.sql.functions.{udf, unix_timestamp}
import spark.implicits._

// getTime returns epoch milliseconds, so nothing is truncated
val fullTimestampUDF = udf { t: Timestamp => t.getTime }

val df = Seq("2019-11-21 18:19:15.817").toDF("sig_ts")
  .withColumn("sig_ts_ut", unix_timestamp($"sig_ts"))
  .withColumn("sig_ts_ut_long", fullTimestampUDF($"sig_ts"))
df.show(false)
+-----------------------+----------+--------------+
|sig_ts |sig_ts_ut |sig_ts_ut_long|
+-----------------------+----------+--------------+
|2019-11-21 18:19:15.817|1574356755|1574356755817 |
+-----------------------+----------+--------------+
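If adding a UDF is undesirable, a similar result can be obtained with built-in casts alone. This is a sketch (not part of the original answer) and assumes the session time zone matches the JVM default; casting a timestamp to double yields epoch seconds with the fractional part intact.

import org.apache.spark.sql.functions.col

// string -> timestamp -> double (epoch seconds with fraction), then scale to milliseconds
val dfNoUdf = Seq("2019-11-21 18:19:15.817").toDF("sig_ts")
  .withColumn("sig_ts_millis",
    (col("sig_ts").cast("timestamp").cast("double") * 1000).cast("long"))
dfNoUdf.show(false)

Under that assumption, the value above yields 1574356755817, matching the UDF output.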
Related
Very Simple question - Need to convert timestamp column in spark dataframe to java.time.Instant format
Here you can convert it to java.time.Instant:
import spark.implicits._   // provides the Encoder[java.sql.Timestamp] needed by .as[...]

val time1 = spark
  .sql("...")
  .as[java.sql.Timestamp]
  .first()
  .toInstant
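If you need an Instant for every row rather than just the first one, one option (a sketch, reusing the same elided query) is to collect the timestamps and convert them on the driver:

import java.time.Instant

val instants: Array[Instant] = spark
  .sql("...")
  .as[java.sql.Timestamp]
  .collect()
  .map(_.toInstant)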
I am receiving time data in my source as a CSV file in the format (HHMMSSHS). I am not sure what HS in the format stands for. Example data will look like 15110708.
I am creating a table in Databricks with the received columns and data. I want to convert this field to a time while processing it in Scala.
I am using UDFs to do formatting on any data on the go, but for this I am totally stuck writing a UDF that parses only the time.
The final output should be 15:11:07:08 or any time format suitable for this string.
I tried java.text.SimpleDateFormat and ran into an unparseable-string issue.
Is there any way to convert the above given string to a time format?
I am storing this value as a column in a Databricks notebook table. Is there any format other than string to save only time values?
Have you tried the following?
import java.time.LocalTime
import java.time.format.DateTimeFormatter
import org.apache.spark.sql.functions.udf

val dtf: DateTimeFormatter = DateTimeFormatter.ofPattern("HHmmssSS")

val localTime = udf { str: String =>
  LocalTime.parse(str, dtf).toString
}
that gives:
+---------+------------+
|Timestamp|converted |
+---------+------------+
|15110708 |15:11:07.080|
|15110708 |15:11:07.080|
+---------+------------+
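For completeness, here is a sketch of how such a UDF might be applied to produce the table above (the column name Timestamp is assumed; building the formatter inside the lambda keeps the closure self-contained):

import java.time.LocalTime
import java.time.format.DateTimeFormatter
import org.apache.spark.sql.functions.udf

val toLocalTime = udf { str: String =>
  // parse HHmmssSS and render as an ISO local time, e.g. 15:11:07.080
  LocalTime.parse(str, DateTimeFormatter.ofPattern("HHmmssSS")).toString
}

Seq("15110708", "15110708").toDF("Timestamp")
  .withColumn("converted", toLocalTime($"Timestamp"))
  .show(false)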
I have a column of timestamp type in a Spark DataFrame, with a date format like '2019-06-13T11:39:10.244Z'.
My goal is to convert this column into EST time (subtracting 4 hours) while keeping the same format.
I tried the from_utc_timestamp API, but it seems to convert the UTC time to my local time zone (+5:30), add it to the timestamp, and then subtract 4 hours from it. I also tried Joda time, but for some reason it adds 33 days to the EST time.
input = 2019-06-13T11:39:10.244Z
Using the from_utc_timestamp API:
val tDf = df.withColumn("newTimeCol", to_utc_timestamp(col("timeCol"), "America/New_York"))
output = 2019-06-13T13:09:10.244Z+5:30
Using the Joda-Time package:
import org.joda.time.{DateTime, DateTimeZone}
import org.apache.spark.sql.functions.{col, udf}

val coder: String => String = (arg: String) => {
  new DateTime(arg, DateTimeZone.UTC).minusHours(4).toString("yyyy-mm-dd'T'HH:mm:s.SS'Z'")
}
val sqlfunc = udf(coder)
val tDf = df.withColumn("newTime", sqlfunc(col("_c20")))
output = 2019-39-13T07:39:10.244Z
desired output = 2019-06-13T07:39:10.244Z
Kindly advise how I should proceed. Thanks in advance.
There is a typo in your format string when creating the output.
Your format string should be yyyy-MM-dd'T'HH:mm:s.SS'Z' but it is yyyy-mm-dd'T'HH:mm:s.SS'Z'.
mm is the format character for minutes, while MM is the format character for months. You can check all format characters in the Joda-Time DateTimeFormat documentation.
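For reference, a corrected version of the UDF from the question might look like the sketch below; besides the MM fix, ss.SSS is used so the output keeps three fractional digits, matching the desired 2019-06-13T07:39:10.244Z.

import org.joda.time.{DateTime, DateTimeZone}
import org.apache.spark.sql.functions.udf

val coderFixed: String => String = (arg: String) =>
  new DateTime(arg, DateTimeZone.UTC)
    .minusHours(4)                                // fixed EST offset, as in the question
    .toString("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")     // MM = months, mm = minutes
val sqlfuncFixed = udf(coderFixed)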
How do you convert a timestamp column to epoch seconds?
var df = sc.parallelize(Seq("2018-07-01T00:00:00Z")).toDF("date_string")
df = df.withColumn("timestamp", $"date_string".cast("timestamp"))
df.show(false)
DataFrame:
+--------------------+---------------------+
|date_string |timestamp |
+--------------------+---------------------+
|2018-07-01T00:00:00Z|2018-07-01 00:00:00.0|
+--------------------+---------------------+
If you have a timestamp, you can cast it to a long to get the epoch seconds:
df = df.withColumn("epoch_seconds", $"timestamp".cast("long"))
df.show(false)
DataFrame:
+--------------------+---------------------+-------------+
|date_string |timestamp |epoch_seconds|
+--------------------+---------------------+-------------+
|2018-07-01T00:00:00Z|2018-07-01 00:00:00.0|1530403200 |
+--------------------+---------------------+-------------+
Use unix_timestamp from org.apache.spark.sql.functions. It can take a timestamp column, or a string column for which you can specify the format. From the documentation:
public static Column unix_timestamp(Column s)
Converts time string in format yyyy-MM-dd HH:mm:ss to Unix timestamp (in seconds), using the default timezone and the default locale, return null if fail.
public static Column unix_timestamp(Column s, String p)
Convert time string with given pattern (see http://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) to Unix time stamp (in seconds), return null if fail.
Use as follows:
import org.apache.spark.sql.functions._
df.withColumn("epoch_seconds", unix_timestamp($"timestamp"))
or, if the column is a string in another format:
df.withColumn("epoch_seconds", unix_timestamp($"date_string", "yyyy-MM-dd'T'HH:mm:ss'Z'"))
It can be easily done with the unix_timestamp function in Spark SQL, like this:
spark.sql("SELECT unix_timestamp(inv_time) AS time_as_long FROM agg_counts LIMIT 10").show()
Hope this helps.
You can use the function unix_timestamp and cast the result to any data type.
Example:
import org.apache.spark.sql.types.LongType
val df1 = df.select(unix_timestamp($"date_string", "yyyy-MM-dd HH:mm:ss").cast(LongType).as("epoch_seconds"))
I have a function "toDate(v: String): Timestamp" that takes a string and converts it into a timestamp with the format "MM-DD-YYYY HH24:MI:SS.NS".
I make a UDF of the function:
val u_to_date = sqlContext.udf.register("u_to_date", toDate _)
The issue happens when I apply the UDF to a DataFrame: the resulting DataFrame loses the last 3 nanosecond digits.
For example, when using the argument "0001-01-01 00:00:00.123456789",
the resulting DataFrame will be in the format
[0001-01-01 00:00:00.123456]
I have even tried a dummy function that returns Timestamp.valueOf("1234-01-01 00:00:00.123456789"). When applying the UDF of the dummy function, it truncates the last 3 nanosecond digits.
I have looked into the sqlContext configuration, and spark.sql.parquet.int96AsTimestamp is set to true (I also tried with it set to false).
I am at a loss here. What is causing the truncation of the last 3 digits?
Example:
The function could be:
import java.sql.Timestamp

def date123(v: String): Timestamp = {
  Timestamp.valueOf("0001-01-01 00:00:00.123456789")
}
It's just a dummy function that should return a timestamp with full nanosecond precision.
Then I would make a UDF:
val u_date123 = sqlContext.udf.register("u_date123", date123 _)
Example df:
import org.apache.spark.sql.Row

val theRow = Row("blah")
val theRdd = sc.makeRDD(Array(theRow))
case class X(x: String)
val df = theRdd.map { case Row(s0) => X(s0.asInstanceOf[String]) }.toDF()
If I apply the UDF to the DataFrame df with a string column, it returns a DataFrame that looks like '[0001-01-01 00:00:00.123456]':
df.select(u_date123($"x")).collect.foreach(println)
I think I found the issue.
Between Spark 1.4.1 and 1.5.0, the size of the timestamp data type was changed from 12 bytes to 8 bytes:
https://fossies.org/diffs/spark/1.4.1_vs_1.5.0/sql/catalyst/src/main/scala/org/apache/spark/sql/types/TimestampType.scala-diff.html
I tested on Spark 1.4.1, and it produces the full nanosecond precision.
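Since newer Spark versions keep only microsecond precision in TimestampType, one workaround (a sketch, not from the original answer; the helper names are made up) is to carry the full-precision value outside of TimestampType, e.g. as a string or as separate epoch-second and nanosecond columns:

import java.sql.Timestamp
import org.apache.spark.sql.functions.udf

// return the full-precision value as text instead of a TimestampType column
val u_date123_str = udf { s: String =>
  Timestamp.valueOf("0001-01-01 00:00:00.123456789").toString   // keeps all nine fractional digits
}

// or keep epoch seconds and nanos-of-second as two numeric fields (a struct column)
val u_date123_parts = udf { s: String =>
  val ts = Timestamp.valueOf("0001-01-01 00:00:00.123456789")
  (ts.toInstant.getEpochSecond, ts.getNanos)                     // (seconds, 123456789)
}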