Python dateutil, check which value is given - python-dateutil

I am parsing an Excel document and getting date/time values...
I am using dateutil to convert them to Python datetime values...
For example, if the incoming value is '2012' I need a way to know that only a year was given,
or if the value is '13:05', that only an hour and minute were given.
from dateutil.parser import *

def convertToDatetime(inValue):
    # if inValue is '2012' I need a way to know that only the year was given.
    # Can I do this while the parse method is used?
    rVal = parse(inValue)
    return rVal

How can I accomplish this?
----- EDIT ----
I can ask the question in this way too:
If only the year is given, dateutil fills in the month and day from today's date... But I need null values for the month and day if only the year is given...

This feature is yet to be implemented in the dateutil module, but you can adapt this snippet to your requirements. The idea is to parse twice with two different defaults: any field that differs between the two results came from the default, not from the input.
from dateutil import parser
from datetime import datetime
import warnings

def double_parse(dt_str):
    # Parse with two different defaults: any field that differs between
    # the two results was filled in from the default, i.e. it was
    # missing from the input string.
    dflt_1 = datetime(1, 1, 1)
    dflt_2 = datetime(2, 2, 2)
    dt1 = parser.parse(dt_str, default=dflt_1)
    dt2 = parser.parse(dt_str, default=dflt_2)
    if dt2.year != dt1.year:
        warnings.warn('Year is missing!', RuntimeWarning)
        return dt1.replace(year=datetime.now().year)
    return dt1
if __name__ == "__main__":
    print(double_parse('2017-04-05'))
    # 2017-04-05 00:00:00
    print(double_parse('03:45 EST'))
    # stderr: RuntimeWarning: Year is missing!
    # 2017-01-01 03:45:00-05:00
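The same two-default trick extends to the month and day, which is closer to what the edit asks for. A minimal sketch (the function name and the None convention here are my own, not part of dateutil):

from dateutil import parser
from datetime import datetime

def parse_with_missing(dt_str):
    # Fields that differ between the two parses came from the defaults,
    # i.e. they were missing from the input; report those as None.
    dt1 = parser.parse(dt_str, default=datetime(1, 1, 1))
    dt2 = parser.parse(dt_str, default=datetime(2, 2, 2))
    return {
        'year': dt1.year if dt1.year == dt2.year else None,
        'month': dt1.month if dt1.month == dt2.month else None,
        'day': dt1.day if dt1.day == dt2.day else None,
    }

print(parse_with_missing('2012'))  # {'year': 2012, 'month': None, 'day': None}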

Related

Write a function that generates a list of last days of each month for the past n months from current date

I am trying to create a list of the last days of each month for the past n months from the current date, not including the current month.
I tried different approaches:
def last_n_month_end(n_months):
    """
    Returns a list of the last n month end dates
    """
    return [datetime.date.today().replace(day=1) - datetime.timedelta(days=1) - datetime.timedelta(days=30*i)
            for i in range(n_months)]
Somehow this partly works, but only as if every month had exactly 30 days, and it also does not work in Databricks PySpark: it returns AttributeError: 'method_descriptor' object has no attribute 'today'. (That error appears when datetime is the class from "from datetime import datetime" rather than the module, so datetime.date is the instance method, not the date class.)
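For what it's worth, a minimal corrected sketch that steps back through real month boundaries instead of assuming 30-day months (it keeps "import datetime" as a module import, which also avoids the AttributeError above):

import datetime

def last_n_month_end(n_months):
    """Return the last day of each of the past n months, most recent first."""
    ends = []
    first = datetime.date.today().replace(day=1)  # first day of the current month
    for _ in range(n_months):
        month_end = first - datetime.timedelta(days=1)  # last day of the previous month
        ends.append(month_end)
        first = month_end.replace(day=1)  # step back one month
    return ends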
I also tried the approach mentioned in Generate a sequence of the last days of all previous N months with a given month
def previous_month_ends(date, months):
    year, month, day = [int(x) for x in date.split('-')]
    d = datetime.date(year, month, day)
    t = datetime.timedelta(1)
    s = datetime.date(year, month, 1)
    return [(x - t).strftime('%Y-%m-%d')
            for m in range(months - 1, -1, -1)
            for x in (datetime.date(s.year, s.month - m, s.day) if s.month > m else
                      datetime.date(s.year - 1, s.month - (m - 12), s.day),)]
but I am not getting it correctly.
I also tried:
df = spark.createDataFrame([(1,)],['id'])
days = df.withColumn('last_dates', explode(expr('sequence(last_day(add_months(current_date(),-3)), last_day(add_months(current_date(), -1)), interval 1 month)')))
I got the last three months (Sep, Oct, Nov), but all of them fall on the 30th even though October ends on the 31st. However, it gives me the correct last days when I go back more than 3 months.
What I am trying to get is this:
(last days of the last 4 months not including last_day of current_date)
daterange = ['2022-08-31','2022-09-30','2022-10-31','2022-11-30']
Not sure if this is the best or most optimal way to do it, but this does it...
It requires the following package, since datetime does not seem to have any way to subtract months, as far as I know, without hardcoding the number of days or weeks. Not sure, so don't quote me on this...
Package Installation:
pip install python-dateutil
Edit: There was a misunderstanding on my end. I had assumed that all dates were required and not just the month ends. Anyway, I hope the updated code helps. Still not the most optimal, but easy to understand, I guess...
# import datetime package
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta

def previous_month_ends(months_to_subtract):
    # get the first day of the current month
    first_day_of_current_month = date.today().replace(day=1)
    print(f"First Day of Current Month: {first_day_of_current_month}")
    # the previous month's last date is the day before the first of this month
    date_range_list = [first_day_of_current_month - relativedelta(days=1)]
    cur_iter = 1
    while cur_iter < months_to_subtract:
        # first day of the month cur_iter months before the current month
        cur_iter_fdom = first_day_of_current_month - relativedelta(months=cur_iter)
        # subtract one day to get the last day of the month before that
        cur_iter_ldom = cur_iter_fdom - relativedelta(days=1)
        # append to the list
        date_range_list.append(cur_iter_ldom)
        # increment counter
        cur_iter += 1
    return date_range_list

print(previous_month_ends(3))
Function to calculate the list of dates between two dates:
calculate the first of the current month, work out the start and end dates, and then loop through them to build the list of dates.
I have ignored the date argument, since I assumed it will be the current date; alternatively, it can be added following your own code, which should work perfectly.
# import datetime package
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta

def gen_date_list(months_to_subtract):
    # get the first day of the current month
    first_day_of_current_month = date.today().replace(day=1)
    print(f"First Day of Current Month: {first_day_of_current_month}")
    start_date = first_day_of_current_month - relativedelta(months=months_to_subtract)
    end_date = first_day_of_current_month - relativedelta(days=1)
    print(f"Start Date: {start_date}")
    print(f"End Date: {end_date}")
    date_range_list = [start_date]
    cur_iter_date = start_date
    while cur_iter_date < end_date:
        cur_iter_date += timedelta(days=1)
        date_range_list.append(cur_iter_date)
    # print(date_range_list)
    return date_range_list

print(gen_date_list(3))
Hope it helps...Edits/Comments are welcome - I am learning myself...
I just thought of a workaround I can use, since my last code works:

df = spark.createDataFrame([(1,)],['id'])
days = df.withColumn('last_dates', explode(expr('sequence(last_day(add_months(current_date(),-3)), last_day(add_months(current_date(), -1)), interval 1 month)')))

It is to enter -4 and just remove the last_date that I do not need with days.pop(0); that should give me the list of needed last_dates.
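A hedged sketch of that workaround (it assumes an active SparkSession named spark; re-applying last_day to the exploded values also normalizes any dates that were clamped to the 30th, so the result lines up with the daterange from the question):

from pyspark.sql.functions import explode, expr, last_day

df = spark.createDataFrame([(1,)], ['id'])
days = df.withColumn(
    'last_dates',
    explode(expr('sequence(last_day(add_months(current_date(), -4)), '
                 'last_day(add_months(current_date(), -1)), interval 1 month)')))
# re-snap each element to its month end, then collect to a plain list
days = days.withColumn('last_dates', last_day('last_dates'))
daterange = [str(r.last_dates) for r in days.select('last_dates').collect()]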
from datetime import date, timedelta

def get_last_dates(n_months):
    '''
    Generates a list of last dates for each month for the past n months.
    Param:
        n_months = number of months back
    '''
    last_dates = []  # initiate an empty list
    first = date.today().replace(day=1)  # first day of the current month
    for _ in range(n_months):
        month_end = first - timedelta(days=1)  # last day of the previous month
        last_dates.append(month_end)
        first = month_end.replace(day=1)  # step back to that month's first day
    return last_dates
This should give you a more accurate last_days
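As a quick sanity check, if today fell in December 2022 this would match the daterange from the question (a hypothetical run):

print(get_last_dates(4))
# [datetime.date(2022, 11, 30), datetime.date(2022, 10, 31),
#  datetime.date(2022, 9, 30), datetime.date(2022, 8, 31)]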

Add days to a datetime and convert it to a date? - Odoo V14

I want to add 30 days to a datetime field and make my date field take this value as a default.
I tried this, but it doesn't work :/
from odoo import models, fields, api
from datetime import datetime
from dateutil.relativedelta import relativedelta

class crm_lead(models.Model):
    _inherit = 'crm.lead'

    date_deadline = fields.Datetime(string='Fermeture prévue')

    @api.onchange('create_date')
    def _onchange_enddate(self):
        if self.create_date:
            date_end = (datetime.strptime(self.create_date, '%Y-%m-%d')
                        + relativedelta(days=+30).strftime('%Y-%m-%d'))
            self.date_deadline = date_end.date()
Thanks in advance!
The create_date is a magic field set in the _create method. To get an equivalent default value for the date_deadline field, you can use the default attribute together with the Date.today function, which at creation time is the creation date.
Example:
date_deadline = fields.Datetime(default=lambda record: fields.Date.today() + relativedelta(days=30))
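For context, a fuller, hedged version of that approach with the imports spelled out (the class body otherwise follows the question; it assumes Odoo accepts a date for a Datetime default and converts it on write):

from odoo import models, fields
from dateutil.relativedelta import relativedelta

class crm_lead(models.Model):
    _inherit = 'crm.lead'

    # the default is evaluated when the record is created,
    # so it effectively means "creation date + 30 days"
    date_deadline = fields.Datetime(
        string='Fermeture prévue',
        default=lambda record: fields.Date.today() + relativedelta(days=30),
    )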
Does this work?:
from odoo import models, fields, api
from datetime import datetime, date, timedelta

class crm_lead(models.Model):
    _inherit = 'crm.lead'

    date_deadline = fields.Datetime(string='Fermeture prévue')

    @api.onchange('create_date')
    def _onchange_enddate(self):
        if self.create_date:
            date_end = date.today() + timedelta(30)
            self.date_deadline = date_end
It gives the current date + 30 days.
It does not convert the Datetime value to a date, so you might want to do that afterwards.

Pyspark Getting the last date of the previous quarter based on Today's Date

In a Code Repository, using pyspark, I'm trying to take today's date and, based on it, retrieve the last day of the prior quarter. That date would then be used to filter the data in a data frame. I was trying to create a dataframe in a Code Repository and that wasn't working. My code works in Code Workbook. This is my Code Workbook code:
import datetime as dt
import pyspark.sql.functions as F

def unnamed():
    date_df = spark.createDataFrame([(dt.date.today(),)], ['date'])
    date_df = date_df \
        .withColumn('qtr_start_date', F.date_trunc('quarter', F.col('date'))) \
        .withColumn('qtr_date', F.date_sub(F.col('qtr_start_date'), 1))
    return date_df
Any help would be appreciated.
I got the following code to run successfully in a Code Repository:
from transforms.api import transform_df, Input, Output
import datetime as dt
import pyspark.sql.functions as F

@transform_df(
    Output("/my/output/dataset"),
)
def my_compute_function(ctx):
    date_df = ctx.spark_session.createDataFrame([(dt.date.today(),)], ['date'])
    date_df = date_df \
        .withColumn('qtr_start_date', F.date_trunc('quarter', F.col('date'))) \
        .withColumn('qtr_date', F.date_sub(F.col('qtr_start_date'), 1))
    return date_df
You'll need to pass the ctx argument into your transform, and you can create the pyspark.sql.DataFrame directly from the underlying spark_session variable.
If you already have the date column available in your input, you'll just need to make sure it is of Date type so that the F.date_trunc call operates on the correct type.
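If the column arrives as a string instead, here is a minimal sketch of that cast (the column name 'date' and the date format are assumptions, not from the question):

import pyspark.sql.functions as F

df = (df
      .withColumn('date', F.to_date(F.col('date'), 'yyyy-MM-dd'))  # ensure Date type
      .withColumn('qtr_start_date', F.date_trunc('quarter', F.col('date')))
      .withColumn('qtr_date', F.date_sub(F.col('qtr_start_date'), 1)))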

Check if Day is Saturday or Sunday

I'm looking to compute the difference between 2 dates in Scala, where neither of the 2 dates should be a Saturday or a Sunday.
I wrote a Scala function to test whether a day is Saturday or Sunday.
I edited my question; this is my code for that test.
I need to use it with two dates, start_date and finish_date, because after this check I'll take the difference between these two dates.
My function jourouvree takes one parameter, not two dates.
How can I modify my code to pass the two dates?
Check if Day is Saturday or Sunday:
import java.time.{LocalDate, DayOfWeek}

def isWeekend(day: LocalDate) =
  day.getDayOfWeek == DayOfWeek.SATURDAY ||
  day.getDayOfWeek == DayOfWeek.SUNDAY
Using the Java 8 date-time API:

import java.time._
import java.time.format.DateTimeFormatter

val formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")

// Assume d1, d2 themselves are not weekend days
def jourOuvree(sd1: String, sd2: String): Unit = {
  val d1 = LocalDateTime.parse(sd1, formatter)
  val d2 = LocalDateTime.parse(sd2, formatter)
  // return if d1 is not earlier than d2. TODO: handle what to be done
  if (d2.isAfter(d1) == false) return
  var (days, dayCtr) = (1, 1)
  while (d1.plusDays(dayCtr).isBefore(d2)) {
    dayCtr += 1
    val dow = d1.plusDays(dayCtr).getDayOfWeek()
    if (!dow.equals(DayOfWeek.SATURDAY) && !dow.equals(DayOfWeek.SUNDAY))
      days += 1
  }
  println(days)
}
Invoke as below:
jourOuvree("2011-03-31 07:55:00", "2011-04-06 15:41:00")
You get 5 printed.
NOTE: The code doesn't handle exceptions in parsing.
Also, there may be finer points in your requirements, and you are the best judge of the changes needed.

Always get "1970" when extracting a year from timestamp

I have a timestamp like "1461819600". I execute this code in a distributed environment as val campaign_startdate_year: String = Utils.getYear(campaign_startdate_timestamp).toString.
The problem is that I always get the same year, 1970. What might be the reason for it?
import com.github.nscala_time.time.Imports._

def getYear(timestamp: Any): Int = {
  var dt = 2017
  if (!timestamp.toString.isEmpty) {
    dt = new DateTime(timestamp.toString.toLong).getYear // toLong should be multiplied by 1000 to get the millisecond value
  }
  dt
}
The same issue occurs when I want to get the day of the month. I get 17 instead of 28.
def getDay(timestamp: Any): Int = {
  var dt = 1
  if (!timestamp.toString.isEmpty) {
    dt = new DateTime(timestamp.toString.toLong).getDayOfYear // note: getDayOfYear is the day within the year; getDayOfMonth returns the day of the month
  }
  dt
}
The timestamp you have is a number of seconds since 01-01-1970, 00:00:00 UTC.
Java (and Scala) usually use timestamps that are a number of milliseconds since 01-01-1970, 00:00:00 UTC.
In other words, you need to multiply the number by 1000. For example, 1461819600 seconds becomes 1461819600000 ms, which is 2016-04-28 05:00:00 UTC; read as milliseconds, 1461819600 is only about 17 days after the epoch, i.e. 17 January 1970, which is why the year always comes out as 1970 (and the day of year as 17).
The timestamp that you have seems to be in seconds since the epoch (i.e. a Unix timestamp), while the Java time utilities expect the timestamp to be in milliseconds.
Just multiply that value by 1000 and you should get the expected results.
You can rely either on Spark SQL functions, which include some date utilities (get year/month/day, add days/months), or use the Joda-Time library for more control over Date and DateTime, as in my answer here: How to replace in values in spark dataframes after recalculations?