How to convert unix timestamp to date in Spark - scala

I have a data frame with a column of unix timestamps (e.g. 1435655706000), and I want to convert it to a date with the format 'yyyy-MM-dd'. I've tried nscala-time but it doesn't work.
val time_col = sqlc.sql("select ts from mr").map(_(0).toString.toDateTime)
time_col.collect().foreach(println)
and I got error:
java.lang.IllegalArgumentException: Invalid format: "1435655706000" is malformed at "6000"

Here it is using the Scala DataFrame functions from_unixtime and to_date:
// NOTE: divide by 1000 required if milliseconds
// e.g. 1446846655609 -> 2015-11-06 21:50:55 -> 2015-11-06
mr.select(to_date(from_unixtime($"ts" / 1000)))
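For context, here is a minimal, self-contained sketch; the toy mr DataFrame below stands in for the question's table, with ts holding epoch milliseconds:
import org.apache.spark.sql.functions.{from_unixtime, to_date}
import sqlContext.implicits._   // or spark.implicits._ on Spark 2.x+

// toy stand-in for the `mr` table; ts holds epoch milliseconds
val mr = Seq(Tuple1(1435655706000L), Tuple1(1446846655609L)).toDF("ts")

// divide by 1000 because the values are in milliseconds
mr.select(to_date(from_unixtime($"ts" / 1000)).as("date")).show()
// +----------+
// |      date|
// +----------+
// |2015-06-30|
// |2015-11-06|
// +----------+   (exact dates depend on the session time zone)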

Since Spark 1.5, there is a built-in function for doing that.
val df = sqlContext.sql("select from_unixtime(ts / 1000, 'yyyy-MM-dd') as `ts` from mr")
(Note the pattern is yyyy, not YYYY, which is the week-based year, and the division by 1000 because ts is in milliseconds.)
Please check the Spark 1.5.2 API docs for more info.
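The equivalent with the DataFrame API would be something along these lines (a sketch; mr is the question's DataFrame, and ts is again divided by 1000 because it holds milliseconds):
import org.apache.spark.sql.functions.from_unixtime
import sqlContext.implicits._   // for the $"ts" column syntax

val df = mr.select(from_unixtime($"ts" / 1000, "yyyy-MM-dd").as("ts"))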

You need to import the following libraries:
import org.joda.time.{DateTime, DateTimeZone}
import org.joda.time.format.DateTimeFormat
Then:
val stri = new DateTime(timeInMillisec).toString("yyyy/MM/dd")
Or, adjusting to your case:
val time_col = sqlContext.sql("select ts from mr")
  .map(line => new DateTime(line(0).toString.toLong).toString("yyyy/MM/dd"))
(Use toLong, not toInt: millisecond timestamps such as 1435655706000 overflow an Int.)
There could be another way:
import com.github.nscala_time.time.Imports._
val date = (new DateTime() + (threshold.toDouble / 1000).toInt.seconds)
  .toString("yyyy/MM/dd")
Hope this helps :)

You needn't convert to a String before applying toDateTime with nscala_time:
import com.github.nscala_time.time.Imports._
scala> 1435655706000L.toDateTime
res4: org.joda.time.DateTime = 2015-06-30T09:15:06.000Z
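Adapted to the question's DataFrame, that could look roughly like this (a sketch; it assumes ts is a LongType column of epoch milliseconds):
import com.github.nscala_time.time.Imports._

val time_col = sqlc.sql("select ts from mr")
  .map(row => row.getLong(0).toDateTime.toString("yyyy-MM-dd"))
time_col.collect().foreach(println)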

I have solved this issue using the joda-time library, by mapping over the DataFrame and converting the DateTime into a String:
import org.joda.time._
val time_col = sqlContext.sql("select ts from mr")
.map(line => new DateTime(line(0)).toString("yyyy-MM-dd"))

You can use the following syntax in Java:
input.select("timestamp")
  .withColumn("date", date_format(col("timestamp").$div(1000).cast(DataTypes.TimestampType), "yyyyMMdd").cast(DataTypes.IntegerType))

What you can do is:
import org.apache.spark.sql.functions.{concat, from_unixtime, substring, typedLit}

input.withColumn("time", concat(from_unixtime(input.col("COL_WITH_UNIX_TIME") / 1000,
  "yyyy-MM-dd'T'HH:mm:ss"), typedLit("."), substring(input.col("COL_WITH_UNIX_TIME"), 11, 3),
  typedLit("Z")))
where time is the new column name and COL_WITH_UNIX_TIME is the name of the column you want to convert. This keeps the milliseconds, giving you more precise values of the form "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'".
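If you prefer to avoid the string surgery, a possible alternative (a sketch, not part of the original answer) keeps the milliseconds by casting the divided value to a timestamp and formatting it directly:
import org.apache.spark.sql.functions.{col, date_format}

// dividing by 1000 gives fractional seconds; the cast keeps the millisecond part
val withTime = input.withColumn(
  "time",
  date_format((col("COL_WITH_UNIX_TIME") / 1000).cast("timestamp"),
    "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"))   // the 'Z' is a literal; accurate only for a UTC session time zone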

Related

Getting error while trying to add a java date as literal in spark dataFrame

I have defined a variable like this in my Scala notebook:
import java.time.{LocalDate, LocalDateTime, ZoneId, ZoneOffset, Duration}
val fiscalYearStartDate = LocalDate.of(fiscalStartYear,7,1);
I would like to add this as column to my dataFrame.
SomeDF.lit(fiscalYearStartDate ).cast("date").as("fiscalYearStartDate")
This is throwing an error:
java.lang.RuntimeException: Unsupported literal type class java.time.LocalDate 2020-10-01
The Spark SQL DateType equivalent in Scala is java.sql.Date, so the solution could be one of:
val finalDF = SomeDF.withColumn("fiscalYearStartDate", lit(fiscalYearStartDate.toString).cast("Date"))
or
val finalDF = SomeDF.withColumn("fiscalYearStartDate", lit(fiscalYearStartDate.format(DateTimeFormatter.ofPattern("yyyy-MM-dd"))).cast("Date")) // requires java.time.format.DateTimeFormatter
or
import java.sql.Date
val finalDF = SomeDF.withColumn("fiscalYearStartDate", lit(Date.valueOf(fiscalYearStartDate)))
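A minimal sketch of the last option in context (the fiscalStartYear value is assumed for illustration):
import java.sql.Date
import java.time.LocalDate
import org.apache.spark.sql.functions.lit

val fiscalStartYear = 2020                                    // assumed value
val fiscalYearStartDate = LocalDate.of(fiscalStartYear, 7, 1)

// java.sql.Date maps directly onto Spark SQL's DateType
val finalDF = SomeDF.withColumn("fiscalYearStartDate", lit(Date.valueOf(fiscalYearStartDate)))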

Update date format in spark dataframe for multiple spark columns

I have a Spark dataframe where a few columns have different date formats.
To handle this I have written the code below to keep a consistent format for all the date columns.
Since the date format may change over time, I have defined a set of candidate formats in dt_formats.
def to_timestamp_multiple(s: Column, formats: Seq[String]): Column = {
  coalesce(formats.map(fmt => to_timestamp(s, fmt)): _*)
}
val dt_formats= Seq("dd-MMM-yyyy", "MMM-dd-yyyy", "yyyy-MM-dd","MM/dd/yy","dd-MM-yy","dd-MM-yyyy","yyyy/MM/dd","dd/MM/yyyy")
val newDF = df.withColumn("ETD1", date_format(to_timestamp_multiple($"ETD", dt_formats).cast("date"), "yyyy-MM-dd")).drop("ETD").withColumnRenamed("ETD1", "ETD")
But here I have to create a new column, then drop the old column, then rename the new column,
which makes the code unnecessarily clumsy, so I want to get rid of this pattern.
I am trying to implement similar functionality with the Scala function below, but it throws org.apache.spark.sql.catalyst.parser.ParseException, and I am unable to identify what change I should make to get it to work.
val CleansedData= rawDF.selectExpr(rawDF.columns.map(
x => { x match {
case "ETA" => s"""date_format(to_timestamp_multiple($x, dt_formats).cast("date"), "yyyy-MM-dd") as ETA"""
case _ => x
} } ) : _*)
Hence seeking help.
Thanks in advance.
Create a UDF to use with select. The select method takes columns and produces another DataFrame.
Also, instead of using coalesce, it might be more straightforward simply to build a parser that handles all of the formats. You can use DateTimeFormatterBuilder for this.
import java.time.format.DateTimeFormatter
import java.time.format.DateTimeFormatterBuilder
import org.apache.spark.sql.functions.{udf, col}
import java.time.LocalDate
import scala.util.Try
import java.sql.Date
val dtFormatStrings:Seq[String] = Seq("dd-MMM-yyyy", "MMM-dd-yyyy", "yyyy-MM-dd","MM/dd/yy","dd-MM-yy","dd-MM-yyyy","yyyy/MM/dd","dd/MM/yyyy")
// use foldLeft with appendOptional method, which for each format,
// returns a new builder with that additional possible format
val initBuilder = new DateTimeFormatterBuilder()
val builder: DateTimeFormatterBuilder = dtFormatStrings.foldLeft(initBuilder)(
(b: DateTimeFormatterBuilder, s:String) => b.appendOptional(DateTimeFormatter.ofPattern(s)))
val formatter = builder.toFormatter()
// Create the UDF, which just takes
// any function returning a sql-compatible type (java.sql.Date, here)
def toTimeStamp2(dateString:String): Date = {
val dateTry: Try[Date] = Try(java.sql.Date.valueOf(LocalDate.parse(dateString, formatter)))
dateTry.toOption.getOrElse(null)
}
val timeConversionUdf = udf(toTimeStamp2 _)
// example DF and new DF
import spark.implicits._   // needed for toDF on a local Seq (assumes a SparkSession named spark)
val df = Seq(("05/08/20"), ("2020-04-03"), ("unparseable")).toDF("ETD")
df.select(timeConversionUdf(col("ETD"))).toDF("ETD2").show
Output:
+----------+
| ETD2|
+----------+
|2020-05-08|
|2020-04-03|
| null|
+----------+
Note that unparseable values end up null, as shown.
Try withColumn(...) with the same name and coalesce, as below:
val dt_formats= Seq("dd-MMM-yyyy", "MMM-dd-yyyy", "yyyy-MM-dd","MM/dd/yy","dd-MM-yy","dd-MM-yyyy","yyyy/MM/dd","dd/MM/yyyy")
val newDF = df.withColumn("ETD", coalesce(dt_formats.map(fmt => to_date($"ETD", fmt)):_*))
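Since the question mentions several date columns, the same pattern can be folded over a list of column names (a sketch; the column list is assumed):
import org.apache.spark.sql.functions.{coalesce, col, to_date}

val dateCols = Seq("ETD", "ETA")   // assumed list of date columns
val normalizedDF = dateCols.foldLeft(df) { (acc, c) =>
  acc.withColumn(c, coalesce(dt_formats.map(fmt => to_date(col(c), fmt)): _*))
}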

java.lang.RuntimeException: Unsupported literal type class org.joda.time.DateTime

I am working on a project where I use a library that I have also used in other projects without any problems:
org.joda.time.DateTime
So I work with Scala, and run the project as a job on Databricks.
scalaVersion := "2.11.12"
The code where the exception comes from, according to my investigation so far, is the following:
var lastEndTime = config.getState("some parameters")
val timespanStart: Long = lastEndTime // last query ending time
var timespanEnd: Long = (System.currentTimeMillis / 1000) - (60*840) // 14 hours ago
val start = new DateTime(timespanStart * 1000)
val end = new DateTime(timespanEnd * 1000)
val date = DateTime.now()
The getState() function returns 1483228800 as a Long value.
EDIT: I use the start and end dates for filtering while building a dataframe. I compare columns (timestamp type) with these values!
val df2= df
.where(col("column_name").isNotNull)
.where(col("column_name") > start &&
col("column_name") <= end)
The error I get:
ERROR Uncaught throwable from user code: java.lang.RuntimeException:
Unsupported literal type class org.joda.time.DateTime
2017-01-01T00:00:00.000Z
I am not sure I actually understand how and why this is an error, so every kind of help is more than welcome!! Thank you a lot in advance!!
This is a common problem when people start to work with Spark SQL. Spark SQL has its own types, and you need to work with them if you want to take advantage of the Dataframe API. In your example you cannot compare a Dataframe column value with a DateTime object directly using a Spark SQL function like col, unless you use a UDF.
If you want to make your comparison using the Spark SQL functions, take a look at this post, where you can find the differences between using Dates and Timestamps with Spark Dataframes.
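For example, if your column is already a TimestampType, one sketch (assuming exactly that) is to turn the Joda bounds into java.sql.Timestamp values, which Spark accepts as literals:
import java.sql.Timestamp
import org.apache.spark.sql.functions.{col, lit}

// convert the Joda DateTime bounds into java.sql.Timestamp literals
val startTs = new Timestamp(start.getMillis)
val endTs   = new Timestamp(end.getMillis)

val df2 = df
  .where(col("column_name").isNotNull)
  .where(col("column_name") > lit(startTs) && col("column_name") <= lit(endTs))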
If you (for any reason) need to use Joda you will inevitably need to build your UDF:
import org.apache.spark.sql.DataFrame
import org.joda.time.DateTime
import org.joda.time.format.{DateTimeFormat, DateTimeFormatter}
object JodaFormater {
val formatter: DateTimeFormatter = DateTimeFormat.forPattern("dd/MM/yyyy HH:mm:ss")
}
object testJoda {
import org.apache.spark.sql.functions.{udf, col}
import JodaFormater._
def your_joda_compare_udf = (start: DateTime) => (end: DateTime) => udf { str: String =>
  val dt: DateTime = formatter.parseDateTime(str)
  dt.isAfter(start.getMillis) && dt.isBefore(end.getMillis)
}
def main(args: Array[String]) : Unit = {
val start: DateTime = ???
val end : DateTime = ???
// Your dataframe with your date as StringType
val df: DataFrame = ???
df.where(your_joda_compare_udf(start)(end)(col("your_date")))
}
}
Note that using this implementation implies some overhead (memory and GC) because of the conversion from StringType to a Joda DateTime object, so you should use the Spark SQL functions whenever you can. In some posts you can read that UDFs are black boxes because Spark cannot optimize their execution, but sometimes they help.

Datetime conversion in Spark

It seems that I can't make date_format work, using a format that I know works on my data (see below):
import org.apache.spark.sql.functions._
dat.withColumn("ts", date_format(dat("timestamp"), "MMM-dd-yyyy hh:mm:ss:SSS a (z)")).select("timestamp", "ts").first
I get
res310: org.apache.spark.sql.Row = [Aug-11-2016 09:21:43:749 PM (CEST),null]
Reading the docs, I understand that date_format should accept any SimpleDateFormat pattern. Is that correct?
I can make it work going through the pain of the code below:
import java.text.SimpleDateFormat
import java.sql.Timestamp
import org.apache.spark.sql.functions.udf

val timestamp_parser = new SimpleDateFormat("MMM-dd-yyyy hh:mm:ss:SSS a (z)")
val udf_timestamp_string_to_long = udf[Long, String]( timestamp_parser.parse(_).getTime() )
val udf_timestamp_long_to_sql_timestamp = udf[Timestamp, Long]( new Timestamp(_) )
dat.withColumn("ts", udf_timestamp_long_to_sql_timestamp(udf_timestamp_string_to_long(dat("timestamp")))).select("timestamp", "ts").first
which gives
res314: org.apache.spark.sql.Row = [Aug-11-2016 09:21:43:749 PM (CEST),2016-08-11 21:21:43.749]
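A likely explanation, offered as a sketch rather than a definitive answer: date_format formats an existing date/timestamp column, it does not parse strings, so a string column in a non-standard format comes out as null. To parse, the pattern goes into unix_timestamp (which, unlike the UDF above, drops the milliseconds):
import org.apache.spark.sql.functions.unix_timestamp

// parse the string with the given pattern; unix_timestamp has only second
// precision, so the :SSS part of the input is lost
val parsed = dat.withColumn("ts",
  unix_timestamp(dat("timestamp"), "MMM-dd-yyyy hh:mm:ss:SSS a (z)").cast("timestamp"))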