How to subtract weeks from a rundate in AWS Glue PySpark - pyspark

I have a scenario where a rundate value is passed to an AWS Glue job in 'YYYY-MM-DD' format.
Let's say 2021-04-19.
Now, I am reading this rundate as datetime.strptime(rundate, "%Y-%m-%d").
But now I want to create 2 variables out of it, variable A and variable B, such that:
Variable A = rundate - 2 weeks (should be saved in YYYYMMDD format)
Variable B = rundate - 1 week (should be saved in YYYYMMDD format)
and then use these variables to filter the data in a DataFrame.

Use the datetime library and timedelta to subtract weeks/days etc. from your rundate.
Example:
Using Python:
import datetime
# Variable A: rundate minus 2 weeks, saved in YYYYMMDD format
varA = datetime.datetime.strftime(datetime.datetime.strptime(rundate, "%Y-%m-%d") - datetime.timedelta(days=14), "%Y%m%d")
# '20210405'
# Variable B: rundate minus 1 week, saved in YYYYMMDD format
varB = datetime.datetime.strftime(datetime.datetime.strptime(rundate, "%Y-%m-%d") - datetime.timedelta(days=7), "%Y%m%d")
# '20210412'
Using PySpark's Spark session:
rundate = '2021-04-19'
# Variable A: rundate minus 2 weeks, formatted as yyyyMMdd
varA = spark.sql(f"select date_format(date_sub('{rundate}', 14), 'yyyyMMdd')").collect()[0][0]
# '20210405'
# Variable B: rundate minus 1 week, formatted as yyyyMMdd
varB = spark.sql(f"select date_format(date_sub('{rundate}', 7), 'yyyyMMdd')").collect()[0][0]
# '20210412'
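To then use these variables for filtering a DataFrame, here is a minimal sketch; the DataFrame, its event_dt column and the sample rows are made up for illustration, while varA and varB are the YYYYMMDD strings computed above:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# Illustrative data; event_dt is a yyyyMMdd string, like varA and varB above
df = spark.createDataFrame(
    [("20210404",), ("20210407",), ("20210413",)], ["event_dt"])

# Keep rows between rundate - 2 weeks (inclusive) and rundate - 1 week (exclusive);
# plain string comparison is safe because yyyyMMdd strings sort chronologically
filtered_df = df.filter((col("event_dt") >= varA) & (col("event_dt") < varB))
filtered_df.show()
# Only 20210407 falls in the window 20210405 <= event_dt < 20210412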

Related

Databricks 10.2 pyspark 3.2.0; How Do I Add a New Timestamp Column Based on Another Date and Integer (Hours) Column?

In a Databricks notebook using pyspark, I need to create/add a new timestamp column based on an existing date column, adding hours to it based on an existing hours-bin integer column. This is to support the creation of an event-driven time-series feature set, which requires in this case that the timestamp be limited to date and hour (no minutes, seconds, etc.). I have tried string-based expr(), date_add(), and various formatted-string and cast() combinations, but I get a maddening slew of errors related to column access, parsing issues and the like. What is the simplest way to accomplish this?
In my opinion, unix_timestamp is the simplest method:
from pyspark.sql.functions import unix_timestamp, col

dfResult = dfSource.withColumn(
    "yourNewTimestampColName",
    (unix_timestamp(col("yourExistingDateCol")) +
     (col("yourExistingHoursCol") * 3600)).cast("timestamp"))
Here yourNewTimestampColName is the name of the timestamp column you want to add, yourExistingDateCol is a date column that must be present under that name in the dfSource dataframe, and yourExistingHoursCol is an integer hour column that must also be present under that name in dfSource.
unix_timestamp() converts the date to seconds since the epoch, so the arithmetic is done in seconds: to add hours, multiply yourExistingHoursCol by 3,600; to add minutes, multiply by 60; to add days, multiply by 3,600*24; and so on.
Executing display(dfResult) should show the structure/content of the dfSource dataframe with a new column named yourNewTimestampColName containing the requested date/hour combination.
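For completeness, here is a small self-contained sketch of the same technique; the sample rows and the Spark session setup are illustrative, not part of the original answer:
from pyspark.sql import SparkSession
from pyspark.sql.functions import unix_timestamp, col

spark = SparkSession.builder.getOrCreate()

# Illustrative input: a date column plus an integer hours-bin column
dfSource = spark.createDataFrame(
    [("2022-01-15", 3), ("2022-01-15", 17)],
    ["yourExistingDateCol", "yourExistingHoursCol"]
).withColumn("yourExistingDateCol", col("yourExistingDateCol").cast("date"))

dfResult = dfSource.withColumn(
    "yourNewTimestampColName",
    (unix_timestamp(col("yourExistingDateCol")) +
     (col("yourExistingHoursCol") * 3600)).cast("timestamp"))

dfResult.show(truncate=False)
# 2022-01-15 with hours-bin 3 becomes 2022-01-15 03:00:00, and so on
# (times are interpreted in the Spark session time zone)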

How to read only the latest 7 days of csv files from an S3 bucket

I am trying to figure out how we can read only the latest 7 days of files from a folder in an S3 bucket using Spark with Scala.
Directory structure we have:
Assume for today's date (Date_1) we have 2 clients and one csv file each:
Source/Date_1/Client_1/sample_1.csv
Source/Date_1/Client_2/sample_1.csv
Tomorrow a new folder will be generated and we will get the below:
Source/Date_2/Client_1/sample_1.csv
Source/Date_2/Client_2/sample_1.csv
Source/Date_2/Client_3/sample_1.csv
Source/Date_2/Client_4/sample_1.csv
NOTE: we expect new client data to be added on any date.
Likewise, on the 7th day we can have:
Source/Date_7/Client_1/sample_1.csv
Source/Date_7/Client_2/sample_1.csv
Source/Date_7/Client_3/sample_1.csv
Source/Date_7/Client_4/sample_1.csv
So, if we now get the 8th day's data, we need to discard the Date_1 folder from the read.
How can we do this while reading csv files from the S3 bucket using Spark with Scala?
I am trying to read the whole "source/*" folder so that we do not miss any client that gets added at any time/day.
There are various ways to do it. One of them is shown below:
You can extract the date from the path and then filter on the last 7 days.
Below is a code snippet for PySpark; the same can be implemented in Spark with Scala.
from datetime import datetime, timedelta
from pyspark.sql.functions import col, input_file_name, lit, regexp_replace, split

# Calculate the cut-off date, 7 days ago, as a yyyyMMdd integer
lastDate = datetime.now() + timedelta(days=-7)
lastDate = int(lastDate.strftime('%Y%m%d'))

# Source path
srcPath = "s3://<bucket-name>/.../Source/"

# Extract the date folder from each file's path and keep it as a numeric column
df1 = (spark.read.option("header", "true").csv(srcPath + "*/*")
       .withColumn("Date", split(regexp_replace(input_file_name(), srcPath, ""), "/")[0].cast("long")))
df2 = df1.filter(col("Date") >= lit(lastDate))
A few things might change in your final implementation, such as the index value [0], which may differ if the path structure is different, and the >= condition, which can be > depending on the requirement.
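To make the [0] index and the cast("long") concrete, here is a small, self-contained Python illustration of what the extraction produces for one hypothetical file path (the bucket and prefix are made up; note that this approach assumes the date folder name is an actual yyyyMMdd value rather than the literal Date_1 naming shown in the question):
from datetime import datetime, timedelta

# Hypothetical prefix and one file path, mirroring the Spark logic above
srcPath = "s3://my-bucket/landing/Source/"
full_path = srcPath + "20230415/Client_1/sample_1.csv"

# What regexp_replace + split + [0] produce inside the DataFrame
date_part = full_path.replace(srcPath, "").split("/")[0]
print(date_part)  # '20230415'

# The same yyyyMMdd cut-off used above
lastDate = int((datetime.now() + timedelta(days=-7)).strftime('%Y%m%d'))
print(int(date_part) >= lastDate)  # True only if the folder date is within the last 7 days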

Convert to date in Cloud Data Fusion

How do we convert a string to a date in Cloud Data Fusion?
I have a column with a value like 20191120 (format yyyyMMdd) and I want to load it into a table in BigQuery as a date. The table column datatype is also date.
What I have tried so far: I converted the string to a timestamp using "parse-as-simple-date", and I then try to convert it to the "yyyy-MM-dd" format using format-date, but that step converts it to a string and the final load fails. I have even tried to explicitly declare the column as date in the output schema, but it fails at runtime.
I also tried keeping it as a timestamp in the pipeline and loading it into the BigQuery date type.
I noticed the error that came up was field dt_1 incompatible with avro integer. Is Data Fusion internally converting the extract into Avro before loading? Avro does not have a date datatype, which could be causing the issue?
Adding answer for posterity:
You can try doing this:
Go to the LocalDateTime column in Wrangler
Open the dropdown and click on "Custom Transform"
Type timestamp.toLocalDate() (timestamp being the column name)
After the last step it should be converted into a LocalDate type, which you can write to BigQuery. Hope this helps.
For this specific date format, the Wrangler Transform directive would be:
parse-as-simple-date date_field_dt yyyyMMdd
set-column date_field_dt date_field_dt.toLocalDate()
The second line is required if the destination is of type Date.
Skip empty values:
set-column date_field_dt empty(date_field_dt) ? date_field_dt : date_field_dt.toLocalDate()
References:
https://github.com/data-integrations/wrangler/blob/develop/wrangler-docs/directives/parse-as-simple-date.md
https://github.com/data-integrations/wrangler/blob/develop/wrangler-docs/directives/parse-as-date.md
You could try to parse your input data with Data Fusion using Wrangler.
In order to test it out I have replicated a workflow where a Data Fusion pipeline is fed with data coming from BigQuery. This data is then parsed to the proper type and then exported back again to BigQuery. Note that the public dataset is “austin_311” and I have used the '311_request' table, as some of its columns are of TIMESTAMP type.
The steps I have done are the following:
I have queried a public dataset that contained TIMESTAMP data using:
select * from `bigquery-public-data.austin_311.311_request`
limit 1000;
I have uploaded it to Google Cloud Storage.
I have created a new Data Fusion batch pipeline following this.
I have used Wrangler to parse the CSV data with a custom 'Simple Date' format, yyyy-MM-dd HH:mm:ss.
I have exported Pipeline results to BigQuery.
This qwiklab has helped me through the steps.
Result:
Following the above procedure I have been able to export Data Fusion data to BigQuery and the DATE fields are exported as TIMESTAMP, as expected.

Spark - How to get the latest hour in S3 path?

I'm using a Databricks notebook with Spark and Scala to read data from S3 into a DataFrame:
myDf = spark.read.parquet(s"s3a://data/metrics/*/*/*/"), where the * wildcards represent year/month/day.
Or I just hardcode it: myDf = spark.read.parquet(s"s3a://data/metrics/2018/05/20/")
Now I want to add an hour parameter right after the day. The idea is to obtain data from S3 for the most recently available hour.
If I do myDf = spark.read.parquet(s"s3a://data/metrics/2018/05/20/*") then I'll get data for all hours of May 20th.
How is it possible to achieve this in a Databricks notebook without hardcoding the hour?
Use the datetime functions:
from datetime import datetime, timedelta
latest_hour = datetime.now() - timedelta(hours = 1)
You can also access the year, month, day and hour individually:
latest_hour.year
latest_hour.month
latest_hour.day
latest_hour.hour
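To tie that back to the S3 path, here is a possible sketch in Python (matching the answer above rather than the Scala in the question); it assumes the layout is year/month/day/hour with zero-padded folder names, that the notebook's spark session is available, and that data for the previous hour has already landed:
from datetime import datetime, timedelta

# One hour before now; assumes data for that hour has already landed in S3
latest_hour = datetime.now() - timedelta(hours=1)

# Zero-pad month/day/hour so the path looks like s3a://data/metrics/2018/05/20/07/
path = "s3a://data/metrics/{:04d}/{:02d}/{:02d}/{:02d}/".format(
    latest_hour.year, latest_hour.month, latest_hour.day, latest_hour.hour)

myDf = spark.read.parquet(path)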

pyspark converting unix time to date

I am using the following code to convert a column of unix time values into dates in pyspark:
transactions3=transactions2.withColumn('date', transactions2['time'].cast('date'))
The column transactions2['time'] contains the unix time values. However, the date column I create here has no values (date = None for all rows). Any idea why this would be?
Use from_unixtime to convert the epoch seconds first, e.g. expr("from_unixtime(time)"), and then cast the result to date; casting the raw unix time column straight to date is what produces the None values.
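A minimal end-to-end sketch (the sample epoch values are made up; column names follow the question):
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_unixtime, col

spark = SparkSession.builder.getOrCreate()

# 'time' holds unix epoch seconds, as in the question; the values are illustrative
transactions2 = spark.createDataFrame([(1618790400,), (1618876800,)], ["time"])

# from_unixtime turns seconds into a 'yyyy-MM-dd HH:mm:ss' string, which casts cleanly to date
transactions3 = transactions2.withColumn(
    "date", from_unixtime(col("time")).cast("date"))

transactions3.show()
# 1618790400 -> 2021-04-19 (interpreted in the Spark session time zone)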