Does unix_timestamp truncate or round milliseconds? - pyspark

From the reference:
Convert time string with given pattern (‘yyyy-MM-dd HH:mm:ss’, by default) to Unix time stamp (in seconds), using the default timezone and the default locale, return null if fail.
I find that this drops milliseconds off DataFrame timestamp columns. I am just wondering whether it simply truncates, or rounds the timestamp to the nearest second.

There is no documentation to back this up, but in Spark 2.2.0 it is truncation. Here is a demo:
from pyspark.sql import Row
import pyspark.sql.functions as F
r = Row('datetime')
lst = [r('2017-10-29 10:20:30.102'), r('2017-10-29 10:20:30.999')]
df = spark.createDataFrame(lst)
(df.withColumn('trunc_datetime', F.unix_timestamp(F.col('datetime')))
.withColumn('seconds', F.from_unixtime(F.col('trunc_datetime'), 'ss'))
.show(2, False))
+-----------------------+--------------+-------+
|datetime               |trunc_datetime|seconds|
+-----------------------+--------------+-------+
|2017-10-29 10:20:30.102|1509286830    |30     |
|2017-10-29 10:20:30.999|1509286830    |30     |
+-----------------------+--------------+-------+
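To double-check truncation against rounding, here is a small follow-up sketch (mine, reusing df and F from the demo above) that compares unix_timestamp with the fractional epoch obtained by casting the timestamp to double:
(df.withColumn('ts', F.col('datetime').cast('timestamp'))
 .withColumn('unix_ts', F.unix_timestamp('ts'))                # whole seconds from unix_timestamp
 .withColumn('epoch_double', F.col('ts').cast('double'))       # keeps the milliseconds
 .withColumn('rounded', F.round('epoch_double').cast('long'))  # what rounding would give
 .show(2, False))
For the .999 row, unix_ts should stay at 1509286830 while rounded would be 1509286831, which is consistent with truncation rather than rounding.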

Related

Spark DataFrame convert milliseconds timestamp column in string format to human readable time with milliseconds

I have a Spark DataFrame with a timestamp column in milliseconds since the epoch. The column is a string. I now want to transform the column to a readable human time but keep the milliseconds.
For example:
1614088453671 -> 23-2-2021 13:54:13.671
Every example I found transforms the timestamp to a normal human-readable time without milliseconds.
What I have:
+------------------+
|epoch_time_seconds|
+------------------+
|1614088453671     |
+------------------+
What I want to end up with:
+------------------+------------------------+
|epoch_time_seconds|human_date              |
+------------------+------------------------+
|1614088453671     |23-02-2021 13:54:13.671 |
+------------------+------------------------+
The part of the time down to the seconds can be obtained using date_format together with from_unixtime, while the milliseconds can be obtained with a modulo. Combine them using format_string.
import org.apache.spark.sql.functions._

val df2 = df.withColumn(
  "human_date",
  format_string(
    "%s.%s",
    date_format(
      from_unixtime(col("epoch_time_seconds")/1000),
      "dd-MM-yyyy HH:mm:ss"
    ),
    col("epoch_time_seconds") % 1000
  )
)
df2.show(false)
+------------------+-----------------------+
|epoch_time_seconds|human_date             |
+------------------+-----------------------+
|1614088453671     |23-02-2021 13:54:13.671|
+------------------+-----------------------+
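For pyspark readers, here is a rough equivalent of the same idea; this is a sketch that assumes a running SparkSession named spark, and the single sample row mirrors the question:
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

# epoch time in milliseconds, stored as a string as in the question
df = spark.createDataFrame([("1614088453671",)], ["epoch_time_seconds"])

df2 = df.withColumn(
    "human_date",
    F.format_string(
        "%s.%s",   # note: no zero-padding, so e.g. 5 ms would print as ".5"
        F.date_format(
            F.from_unixtime(F.col("epoch_time_seconds") / 1000),
            "dd-MM-yyyy HH:mm:ss",
        ),
        (F.col("epoch_time_seconds") % 1000).cast("long"),
    ),
)
df2.show(truncate=False)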

Filtering dataframe spark scala for dates greater than current time

I have a DataFrame in Spark 1.6 from which I would like to select all rows with a timestamp greater than the current time. I am filtering on the "time_occurred" column, which has the format "yyyy-MM-dd'T'HH:mm:ss.SSS". I was wondering what the best way is to achieve this?
The best way would be to cast the field to timestamp type, using the regexp_replace function to replace the 'T'.
Then we can filter the data in the DataFrame using the current_timestamp function.
Example:
Spark-scala-1.6:
import sqlContext.implicits._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
//sample data
val df=sc.parallelize(Seq(("2019-10-17'T'18:30:45.123"),("2019-10-15'T'18:30:45.123"))).toDF("ts")
df.filter(regexp_replace('ts,"'T'"," ").cast("timestamp") > current_timestamp).show(false)
Result:
+-------------------------+
|ts                       |
+-------------------------+
|2019-10-17'T'18:30:45.123|
+-------------------------+
In case you need to replace the 'T' so that the ts field itself ends up with timestamp type, use this approach:
df.withColumn("ts",regexp_replace('ts,"'T'"," ").cast("timestamp"))
.filter('ts > current_timestamp).show(false)
Result:
+-----------------------+
|ts                     |
+-----------------------+
|2019-10-17 18:30:45.123|
+-----------------------+
The resulting ts field will have timestamp type.
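A rough PySpark counterpart of the same approach (a sketch assuming a SparkSession named spark, with the same made-up sample data):
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

# sample data in the same quoted-'T' string format as above
df = spark.createDataFrame(
    [("2019-10-17'T'18:30:45.123",), ("2019-10-15'T'18:30:45.123",)], ["ts"])

# strip the quoted 'T', cast to timestamp, and keep only rows in the future
(df.withColumn("ts", F.regexp_replace(F.col("ts"), "'T'", " ").cast("timestamp"))
   .filter(F.col("ts") > F.current_timestamp())
   .show(truncate=False))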

Why does the aggregation function pyspark.sql.functions.collect_list() add the local timezone offset on display?

I run the following code in a pyspark shell session. Running collect_list() after a groupBy changes how timestamps are displayed (a UTC+02:00 offset is added, probably because this is the local offset in Greece, where the code is run). Although the display is problematic, the timestamp under the hood remains unchanged. This can be observed either by adding a column with the actual unix timestamps or by reverting the DataFrame to its initial shape using pyspark.sql.functions.explode(). Is this a bug?
import datetime
import os
import time
from pyspark.sql import functions, types
# configure utc timezone
spark.conf.set("spark.sql.session.timeZone", "UTC")
os.environ['TZ'] = 'UTC'
time.tzset()
# create DataFrame
date_time = datetime.datetime(year = 2019, month=1, day=1, hour=12)
data = [(1, date_time), (1, date_time)]
schema = types.StructType([types.StructField("id", types.IntegerType(), False), types.StructField("time", types.TimestampType(), False)])
df_test = spark.createDataFrame(data, schema)
df_test.show()
+---+-------------------+
| id|               time|
+---+-------------------+
|  1|2019-01-01 12:00:00|
|  1|2019-01-01 12:00:00|
+---+-------------------+
# GroupBy and collect_list
df_test1 = df_test.groupBy("id").agg(functions.collect_list("time"))
df_test1.show(1, False)
+---+----------------------------------------------+
|id |collect_list(time)                            |
+---+----------------------------------------------+
|1  |[2019-01-01 14:00:00.0, 2019-01-01 14:00:00.0]|
+---+----------------------------------------------+
# add column with unix timestamps
to_timestamp = functions.udf(lambda x : [value.timestamp() for value in x], types.ArrayType(types.FloatType()))
df_test1 = df_test1.withColumn("unix_timestamp", to_timestamp(functions.col("collect_list(time)")))
df_test1.show(1, False)
+---+----------------------------------------------+----------------------------+
|id |collect_list(time)                            |unix_timestamp              |
+---+----------------------------------------------+----------------------------+
|1  |[2019-01-01 14:00:00.0, 2019-01-01 14:00:00.0]|[1.54634394E9, 1.54634394E9]|
+---+----------------------------------------------+----------------------------+
# explode list to distinct rows
df_test.groupBy("id").agg(functions.collect_list("time")).withColumn("test", functions.explode(functions.col("collect_list(time)"))).show(2, False)
+---+----------------------------------------------+-------------------+
|id |collect_list(time)                            |test               |
+---+----------------------------------------------+-------------------+
|1  |[2019-01-01 14:00:00.0, 2019-01-01 14:00:00.0]|2019-01-01 12:00:00|
|1  |[2019-01-01 14:00:00.0, 2019-01-01 14:00:00.0]|2019-01-01 12:00:00|
+---+----------------------------------------------+-------------------+
P.S. 1.54634394E9 == 2019-01-01 12:00:00, which is the correct UTC timestamp.
For me, the code above works but does not shift the time as in your case.
Maybe check what your session time zone is (and, optionally, set it to some tz):
spark.conf.get('spark.sql.session.timeZone')
In general, TimestampType in pyspark is not tz-aware as in Pandas; rather, it passes long ints around and displays them according to your machine's local time zone (by default).
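To illustrate that last point, here is a minimal sketch (assuming a SparkSession named spark): the internal long value, exposed by casting to long, stays the same, while show() renders it according to the current session time zone. The exact strings printed depend on the driver's local time zone at DataFrame creation, but epoch_seconds will be identical in both shows:
from datetime import datetime
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# one row with a timestamp; the naive datetime is converted once, at creation time
df = spark.createDataFrame([(datetime(2019, 1, 1, 12, 0, 0),)], ["time"])

# same internal instant, rendered under two different session time zones
spark.conf.set("spark.sql.session.timeZone", "UTC")
df.select("time", F.col("time").cast("long").alias("epoch_seconds")).show()

spark.conf.set("spark.sql.session.timeZone", "Europe/Athens")
df.select("time", F.col("time").cast("long").alias("epoch_seconds")).show()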

How to convert timestamp column to epoch seconds?

How do you convert a timestamp column to epoch seconds?
var df = sc.parallelize(Seq("2018-07-01T00:00:00Z")).toDF("date_string")
df = df.withColumn("timestamp", $"date_string".cast("timestamp"))
df.show(false)
DataFrame:
+--------------------+---------------------+
|date_string         |timestamp            |
+--------------------+---------------------+
|2018-07-01T00:00:00Z|2018-07-01 00:00:00.0|
+--------------------+---------------------+
If you have a timestamp, you can cast it to a long to get the epoch seconds:
df = df.withColumn("epoch_seconds", $"timestamp".cast("long"))
df.show(false)
DataFrame:
+--------------------+---------------------+-------------+
|date_string         |timestamp            |epoch_seconds|
+--------------------+---------------------+-------------+
|2018-07-01T00:00:00Z|2018-07-01 00:00:00.0|1530403200   |
+--------------------+---------------------+-------------+
Use unix_timestamp from org.apache.spark.sql.functions. It can be used on a timestamp column, or on a string column where it is possible to specify the format. From the documentation:
public static Column unix_timestamp(Column s)
Converts time string in format yyyy-MM-dd HH:mm:ss to Unix timestamp (in seconds), using the default timezone and the default locale, return null if fail.
public static Column unix_timestamp(Column s, String p)
Convert time string with given pattern (see http://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) to Unix time stamp (in seconds), return null if fail.
Use as follows:
import org.apache.spark.sql.functions._
df.withColumn("epoch_seconds", unix_timestamp($"timestamp"))
or, if the column is a string with another format:
df.withColumn("epoch_seconds", unix_timestamp($"date_string", "yyyy-MM-dd'T'HH:mm:ss'Z'"))
It can be done easily with the unix_timestamp function in Spark SQL, like this:
spark.sql("SELECT unix_timestamp(inv_time) AS time_as_long FROM agg_counts LIMIT 10").show()
Hope this helps.
You can use the unix_timestamp function and cast the result to any data type.
Example:
val df1 = df.select(unix_timestamp($"date_string", "yyyy-MM-dd'T'HH:mm:ss'Z'").cast("long").as("epoch_seconds"))
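For completeness, a PySpark sketch of the same two conversions (assuming a SparkSession named spark and the sample row from the question):
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([("2018-07-01T00:00:00Z",)], ["date_string"])

(df.withColumn("timestamp", F.col("date_string").cast("timestamp"))
   .withColumn("epoch_from_cast", F.col("timestamp").cast("long"))
   # with the literal 'Z' pattern the string is parsed in the session time zone,
   # so this matches epoch_from_cast only when the session time zone is UTC
   .withColumn("epoch_from_pattern",
               F.unix_timestamp("date_string", "yyyy-MM-dd'T'HH:mm:ss'Z'"))
   .show(truncate=False))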

Aggregating JSON object in Dataframe and converting string timestamp to date

I have JSON rows that look like the following:
[{"time":"2017-03-23T12:23:05","user":"randomUser","action":"sleeping"}]
[{"time":"2017-03-23T12:24:05","user":"randomUser","action":"sleeping"}]
[{"time":"2017-03-23T12:33:05","user":"randomUser","action":"sleeping"}]
[{"time":"2017-03-23T15:33:05","user":"randomUser2","action":"eating"}]
[{"time":"2017-03-23T15:33:06","user":"randomUser2","action":"eating"}]
So I have two problems. First of all, the time is stored as a String inside my df; I believe it has to be a date/timestamp type for me to aggregate on it?
Second, I need to aggregate that data into 5-minute intervals;
for example, everything that happens from 2017-03-23T12:20:00 to 2017-03-23T12:24:59 needs to be aggregated and considered as the 2017-03-23T12:20:00 timestamp.
The expected output is:
[{"time":"2017-03-23T12:20:00","user":"randomUser","action":"sleeping","count":2}]
[{"time":"2017-03-23T12:30:00","user":"randomUser","action":"sleeping","count":1}]
[{"time":"2017-03-23T15:30:00","user":"randomUser2","action":"eating","count":2}]
thanks
You can convert the StringType column into a TimestampType column using casting; then you can cast the timestamp into IntegerType to make the "rounding" down to the last 5-minute interval easier, and group by that (and all other columns):
// importing SparkSession's implicits and the types used below
import spark.implicits._
import org.apache.spark.sql.types.{IntegerType, TimestampType}

// Use casting to convert String into Timestamp:
val withTime = df.withColumn("time", $"time" cast TimestampType)

// calculate the "most recent 5-minute-round time" and group by all
val result = withTime.withColumn("time", $"time" cast IntegerType)
  .withColumn("time", ($"time" - ($"time" mod (60 * 5))) cast TimestampType)
  .groupBy("time", "user", "action").count()
result.show(truncate = false)
// +---------------------+-----------+--------+-----+
// |time                 |user       |action  |count|
// +---------------------+-----------+--------+-----+
// |2017-03-23 12:20:00.0|randomUser |sleeping|2    |
// |2017-03-23 15:30:00.0|randomUser2|eating  |2    |
// |2017-03-23 12:30:00.0|randomUser |sleeping|1    |
// +---------------------+-----------+--------+-----+
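For reference, a rough PySpark version of the same truncation trick (a sketch assuming a SparkSession named spark, with rows copied from the question):
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# rows copied from the question
df = spark.createDataFrame(
    [("2017-03-23T12:23:05", "randomUser", "sleeping"),
     ("2017-03-23T12:24:05", "randomUser", "sleeping"),
     ("2017-03-23T12:33:05", "randomUser", "sleeping"),
     ("2017-03-23T15:33:05", "randomUser2", "eating"),
     ("2017-03-23T15:33:06", "randomUser2", "eating")],
    ["time", "user", "action"])

result = (df
    .withColumn("time", F.col("time").cast("timestamp"))
    # truncate the epoch seconds down to the start of the 5-minute bucket
    .withColumn("time",
                (F.col("time").cast("long") - F.col("time").cast("long") % (60 * 5))
                .cast("timestamp"))
    .groupBy("time", "user", "action").count())

result.show(truncate=False)
In Spark 2.0+, F.window(F.col("time"), "5 minutes") gives the same buckets more idiomatically, though it produces a window struct (start/end) rather than a single timestamp column.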