How can I split a timestamp into date and time? - scala

//loading DF
val df1 = spark.read.option("header", true).option("inferSchema", true).csv("time.csv")
//
+-------------+
|    date_time|
+-------------+
|1545905416000|
+-------------+
When I use cast to change the column value to DateType, it shows an error:
=> the datatype is not matching (date_time : bigint) in df
df1.withColumn("date_time", df1("date").cast(DateType)).show()
Any solution for solving this?
i tried doing
val a = df1.withColumn("date_time", df1("date").cast(StringType)).drop("date").toDF()
a.withColumn("formattedDateTime", a("date_time").cast(DateType)).show()
but it does not work.

Welcome to StackOverflow!
You need to convert the timestamp from epoch milliseconds to a date and then format it. You can try this:
import spark.implicits._
import org.apache.spark.sql.functions._

val df = spark.read.option("header", true).option("inferSchema", true).csv("time.csv")
val df1 = df
  .withColumn(
    "dateCreated",
    date_format(
      to_date(
        substring(from_unixtime($"date_time".divide(1000)), 0, 10),
        "yyyy-MM-dd"
      ),
      "dd-MM-yyyy"
    )
  )
  .withColumn(
    "timeCreated",
    substring(from_unixtime($"date_time".divide(1000)), 11, 19)
  )
Sample data from my use case:
+---------+-------------+-------+-----------+-----------+
|     adId|    date_time|  price|dateCreated|timeCreated|
+---------+-------------+-------+-----------+-----------+
|230010452|1469178808000| 5950.0| 22-07-2016|   14:43:28|
|230147621|1469456306000|19490.0| 25-07-2016|   19:48:26|
|229662644|1468546792000|12777.0| 15-07-2016|   07:09:52|
|229218611|1467815284000| 9996.0| 06-07-2016|   19:58:04|
|229105894|1467656022000| 7700.0| 04-07-2016|   23:43:42|
|230214681|1469559471000| 4600.0| 27-07-2016|   00:27:51|
|230158375|1469469248000|  999.0| 25-07-2016|   23:24:08|
+---------+-------------+-------+-----------+-----------+
You may need to adjust for timezone: by default the output uses your session timezone (for me it's GMT+05:30). Hope it helps.
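As an aside, a shorter variant of the same idea (a minimal sketch, assuming Spark 2.x and that date_time holds epoch milliseconds, reusing the df read above): cast the value to a timestamp once and derive both columns with date_format.
import spark.implicits._
import org.apache.spark.sql.functions._

// Sketch: epoch millis -> timestamp, then format the date and time parts separately.
val withTs = df.withColumn("ts", ($"date_time" / 1000).cast("timestamp"))
val splitDf = withTs
  .withColumn("dateCreated", date_format($"ts", "dd-MM-yyyy"))
  .withColumn("timeCreated", date_format($"ts", "HH:mm:ss"))
  .drop("ts")
splitDf.show(false)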

Related

Convert event time into date and time in Pyspark?

I have the below event_time in my data frame.
I would like to convert the event_time into date/time. I used the below code, however it's not coming out properly:
import pyspark.sql.functions as f
df = df.withColumn("date", f.from_unixtime("Event_Time", "dd/MM/yyyy HH:MM:SS"))
df.show()
I am getting the output below and it's not correct.
Can anyone advise how to do this properly, as I am new to PySpark?
It seems that your data is in microseconds (1/1,000,000 of a second), so you would have to divide by 1,000,000:
df = spark.createDataFrame(
    [
        ('1645904274665267',),
        ('1645973845823770',),
        ('1644134156697560',),
        ('1644722868485010',),
        ('1644805678702121',),
        ('1645071502180365',),
        ('1644220446396240',),
        ('1645736052650785',),
        ('1646006645296010',),
        ('1644544811297016',),
        ('1644614023559317',),
        ('1644291365608571',),
        ('1645643575551339',)
    ], ['Event_Time']
)
import pyspark.sql.functions as f
df = df.withColumn("date", f.from_unixtime(f.col("Event_Time")/1000000))
df.show(truncate = False)
Output:
+----------------+-------------------+
|Event_Time      |date               |
+----------------+-------------------+
|1645904274665267|2022-02-26 20:37:54|
|1645973845823770|2022-02-27 15:57:25|
|1644134156697560|2022-02-06 08:55:56|
|1644722868485010|2022-02-13 04:27:48|
|1644805678702121|2022-02-14 03:27:58|
|1645071502180365|2022-02-17 05:18:22|
|1644220446396240|2022-02-07 08:54:06|
|1645736052650785|2022-02-24 21:54:12|
|1646006645296010|2022-02-28 01:04:05|
|1644544811297016|2022-02-11 03:00:11|
|1644614023559317|2022-02-11 22:13:43|
|1644291365608571|2022-02-08 04:36:05|
|1645643575551339|2022-02-23 20:12:55|
+----------------+-------------------+
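For reference, the same microsecond conversion expressed in Scala (a sketch, assuming a DataFrame df with a string Event_Time column holding epoch microseconds, analogous to the PySpark snippet above):
import spark.implicits._
import org.apache.spark.sql.functions._

// Sketch: epoch microseconds -> seconds -> timestamp, then split into date and time strings.
val converted = df
  .withColumn("ts", ($"Event_Time".cast("long") / 1000000).cast("timestamp"))
  .withColumn("date", date_format($"ts", "dd/MM/yyyy"))
  .withColumn("time", date_format($"ts", "HH:mm:ss"))
converted.show(false)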

Convert seconds to hhmmss Spark

I have a UDF that creates a timestamp out of two field values, a date and a time. However, the time field is in seconds.
So how can I merge the two fields of type date and seconds into an hour column of Unix timestamp type?
My current implementation looks like this:
private val unix_epoch = udf[Long, String, String] { (date, time) =>
  deltaDateFormatter.parseDateTime(s"$date $formatted").getSeconds
}

def transform(inputDf: DataFrame): Unit = {
  inputDf
    .withColumn("event_hour", unix_epoch($"event_date", $"event_time"))
    .withColumn("event_ts", from_unixtime($"event_hour").cast(TimestampType))
}
Input data:
event_date,event_time
20170501,87721
20170501,87728
20170501,87721
20170501,87726
Desired output:
event_tmstp, event_hour
2017-05-01 00:22:01,1493598121
2017-05-01 00:22:08,1493598128
2017-05-01 00:22:01,1493598121
2017-05-01 00:22:06,1493598126
Update. data schema:
event_date: string (nullable = true)
event_time: integer (nullable = true)
Cast event_date to a unix timestamp, add the event_time column to get event_hour, and convert back to a normal timestamp for event_tmstp.
PS: I'm not sure why event_time is 86400 seconds (1 day) too large; I had to subtract that to get your expected output.
import spark.implicits._
import org.apache.spark.sql.functions._

val df = Seq(
  ("20170501", 87721),
  ("20170501", 87728),
  ("20170501", 87721),
  ("20170501", 87726)
).toDF("event_date", "event_time")

val df2 = df.select(
  unix_timestamp(to_date($"event_date", "yyyyMMdd")) + $"event_time" - 86400
).toDF("event_hour").select(
  $"event_hour".cast("timestamp").as("event_tmstp"),
  $"event_hour"
)
df2.show
+-------------------+----------+
|        event_tmstp|event_hour|
+-------------------+----------+
|2017-05-01 00:22:01|1493598121|
|2017-05-01 00:22:08|1493598128|
|2017-05-01 00:22:01|1493598121|
|2017-05-01 00:22:06|1493598126|
+-------------------+----------+
Check the code below if it helps; it works without a UDF:
val df = Seq(
  (20170501, 87721),
  (20170501, 87728),
  (20170501, 87721),
  (20170501, 87726)
).toDF("date", "time")

df
  .withColumn(
    "date",
    to_date(unix_timestamp($"date".cast("string"), "yyyyMMdd").cast("timestamp"))
  )
  .withColumn(
    "event_hour",
    unix_timestamp(
      concat_ws(" ", $"date", from_unixtime($"time", "HH:mm:ss.S")).cast("timestamp")
    )
  )
  .withColumn("event_ts", from_unixtime($"event_hour"))
  .show(false)
+----------+-----+----------+-------------------+
|date      |time |event_hour|event_ts           |
+----------+-----+----------+-------------------+
|2017-05-01|87721|1493598121|2017-05-01 00:22:01|
|2017-05-01|87728|1493598128|2017-05-01 00:22:08|
|2017-05-01|87721|1493598121|2017-05-01 00:22:01|
|2017-05-01|87726|1493598126|2017-05-01 00:22:06|
+----------+-----+----------+-------------------+
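As a further option, the HH:mm:ss text can be derived with plain integer arithmetic, which avoids any dependence on the session timezone (a minimal sketch, reusing the date/time columns of the df built just above):
import spark.implicits._
import org.apache.spark.sql.functions._

// Sketch: seconds-since-midnight -> "HH:mm:ss" using arithmetic only.
// The modulo by 86400 wraps values that run past one day, as in the sample data.
val secs = ($"time" % 86400).cast("long")
val hms = format_string(
  "%02d:%02d:%02d",
  (secs / 3600).cast("long"),          // hours
  ((secs % 3600) / 60).cast("long"),   // minutes
  (secs % 60).cast("long")             // seconds
)
df.withColumn("event_hms", hms).show(false)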

Scala: For loop on dataframe, create new column from existing by index

I have a dataframe with two columns:
id (string), date (timestamp)
I would like to loop through the dataframe and add a new column with a URL that includes the id. The algorithm should look something like this:
add one new column with the following value:
for each id
"some url" + the value of the dataframe's id column
I tried to make this work in Scala, but I have problems with getting the specific id at index a:
val k = df2.count().asInstanceOf[Int]
// for loop execution with a range
for (a <- 1 to k) {
  // println("Value of a: " + a);
  val dfWithFileURL = dataframe.withColumn("fileUrl", "https://someURL/" + dataframe("id")[a])
}
But this
dataframe("id")[a]
is not working in Scala. I could not find a solution yet, so any suggestions are welcome!
You can simply use the withColumn function in Scala, something like this:
val df = Seq(
  (1, "1 Jan 2000"),
  (2, "2 Feb 2014"),
  (3, "3 Apr 2017")
).toDF("id", "date")

// Add the fileUrl column
val dfNew = df.withColumn("fileUrl", concat(lit("https://someURL/"), $"id"))
dfNew.show
My results:
+---+----------+-----------------+
| id|      date|          fileUrl|
+---+----------+-----------------+
|  1|1 Jan 2000|https://someURL/1|
|  2|2 Feb 2014|https://someURL/2|
|  3|3 Apr 2017|https://someURL/3|
+---+----------+-----------------+
Not sure if this is what you require, but you can use zipWithIndex for indexing.
data.show()
+---+------------------+
| Id|               Url|
+---+------------------+
|111|http://abc.go.org/|
|222|http://xyz.go.net/|
+---+------------------+
import org.apache.spark.sql._
import org.apache.spark.sql.types.{StructType, StructField, StringType}

val df = sqlContext.createDataFrame(
  data.rdd.zipWithIndex.map { case (r, i) =>
    Row.fromSeq(r.toSeq :+ s"${r.getString(1)}${i + 1}")
  },
  StructType(data.schema.fields :+ StructField("fileUrl", StringType, false))
)
Output:
df.show(false)
+---+------------------+-------------------+
|Id |Url               |fileUrl            |
+---+------------------+-------------------+
|111|http://abc.go.org/|http://abc.go.org/1|
|222|http://xyz.go.net/|http://xyz.go.net/2|
+---+------------------+-------------------+
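If dropping to the RDD API feels heavyweight, a window function gives a similar 1-based index (a sketch, assuming the same data DataFrame; note that a global window like this pulls all rows into a single partition, so it only suits small data):
import spark.implicits._
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Sketch: 1-based row index via row_number, then append it to the Url column.
val indexed = data
  .withColumn("rn", row_number().over(Window.orderBy($"Id")))
  .withColumn("fileUrl", concat($"Url", $"rn".cast("string")))
  .drop("rn")
indexed.show(false)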

Change column value in a dataframe spark scala

This is how my dataframe looks at the moment:
+------------+
|        DATE|
+------------+
|    19931001|
|    19930404|
|    19930603|
|    19930805|
+------------+
I am trying to reformat this string value to yyyy-mm-dd hh:mm:ss.fff and keep it as a string, not a date type or timestamp.
How would I do that using the withColumn method ?
Here is a solution using a UDF and withColumn; I have assumed that you have a string date field in the Dataframe:
// Create the dfList dataframe
import spark.implicits._
import org.apache.spark.sql.functions.udf
import java.text.SimpleDateFormat

val dfList = spark.sparkContext
  .parallelize(Seq("19931001", "19930404", "19930603", "19930805")).toDF("DATE")

val dateToTimeStamp = udf((date: String) => {
  val stringDate = date.substring(0, 4) + "/" + date.substring(4, 6) + "/" + date.substring(6, 8)
  val format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
  format.format(new SimpleDateFormat("yyyy/MM/dd").parse(stringDate))
})

dfList.withColumn("DATE", dateToTimeStamp($"DATE")).show()
withClumn("date",
from_unixtime(unix_timestamp($"date", "yyyyMMdd"), "yyyy-MM-dd hh:mm:ss.fff") as "date")
this should work.
Another notice is the that mm gives minutes and MM gives months, hope this help you.
First, I created this DF:
val df = sc.parallelize(Seq("19931001","19930404","19930603","19930805")).toDF("DATE")
For date management we are going to use the Joda-Time library (don't forget to add the joda-time.jar to the classpath):
import org.joda.time.format.DateTimeFormat
import org.joda.time.format.DateTimeFormatter

def func(s: String): String = {
  val dateFormat = DateTimeFormat.forPattern("yyyyMMdd")
  val resultDate = dateFormat.parseDateTime(s)
  resultDate.toString()
}
Finally, apply the function to dataframe:
val temp = df.map(l => func(l.get(0).toString()))
val df2 = temp.toDF("DATE")
df2.show()
This answer still needs some work; I'm new to Spark myself, but it gets the job done, I think!
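A non-UDF variant that yields the requested string directly (a minimal sketch, assuming Spark 2.2+ for the two-argument to_date and reusing the dfList dataframe from the first answer):
import spark.implicits._
import org.apache.spark.sql.functions._

// Sketch: parse yyyyMMdd, then render it back with a constant midnight time component.
dfList
  .withColumn("DATE", date_format(to_date($"DATE", "yyyyMMdd"), "yyyy-MM-dd HH:mm:ss.SSS"))
  .show(false)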

Reading a full timestamp into a dataframe

I am trying to learn Spark and I am reading a dataframe with a timestamp column using the unix_timestamp function as below:
val columnName = "TIMESTAMPCOL"
val sequence = Seq("2016-01-20 12:05:06.999")
val dataframe = sequence.toDF(columnName)
val typeDataframe = dataframe.withColumn(columnName, org.apache.spark.sql.functions.unix_timestamp($"TIMESTAMPCOL"))
typeDataframe.show
This produces an output:
+------------+
|TIMESTAMPCOL|
+------------+
|  1453320306|
+------------+
How can I read it so that I don't lose the ms, i.e. the .999 part? I tried using unix_timestamp(col: Col, s: String) where s is the SimpleDateFormat, e.g. "yyyy-MM-dd hh:mm:ss", without any luck.
To retain the milliseconds, use the "yyyy-MM-dd HH:mm:ss.SSS" format. You can use date_format like below:
val typeDataframe = dataframe.withColumn(columnName, org.apache.spark.sql.functions.date_format($"TIMESTAMPCOL","yyyy-MM-dd HH:mm:ss.SSS"))
typeDataframe.show
This will give you
+-----------------------+
|TIMESTAMPCOL           |
+-----------------------+
|2016-01-20 12:05:06.999|
+-----------------------+
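If a true TimestampType column is wanted rather than a formatted string, a plain cast also keeps the fractional seconds, since the ISO-like input parses directly (a sketch, reusing the dataframe from the question):
import spark.implicits._

// Sketch: string -> TimestampType without losing the .999 part.
val tsDataframe = dataframe.withColumn("TIMESTAMPCOL", $"TIMESTAMPCOL".cast("timestamp"))
tsDataframe.printSchema()
tsDataframe.show(false)   // 2016-01-20 12:05:06.999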