Subtract the weekday from a date in Hive SQL

I'm completely new to Hive SQL and I need to do the following.
I have a date column, and I would like to create a new one holding the Sunday before that date.
In Excel I would write the following:
my_date-WEEKDAY(my_date,1)+1
and in sql:
DATEADD(DD, -(DATEPART(DW, my_date)-1), my_date)
I tried the following in Hive SQL:
DATE_SUB (my_date, date_format(my_date,'u')-1)
but date_format returns a string.
Any ideas?

Cast the result of date_format to int and do the arithmetic. The % 7 maps Sunday (day 7 in the 'u' pattern) to an offset of 0, so a date that is already a Sunday stays put:
DATE_SUB(my_date,cast(date_format(my_date,'u') as int)%7)
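The modulo arithmetic is easy to sanity-check outside Hive. This is a Python sketch, not Hive itself; it relies on `isoweekday()` using the same numbering as the 'u' pattern (1 = Monday … 7 = Sunday):

```python
from datetime import date, timedelta

def previous_sunday(d: date) -> date:
    """Mimic DATE_SUB(d, cast(date_format(d,'u') as int) % 7).

    isoweekday() returns 1 for Monday through 7 for Sunday, so a
    Sunday gives 7 % 7 == 0 and stays put; any other day moves back
    to the prior Sunday.
    """
    return d - timedelta(days=d.isoweekday() % 7)

# 2024-05-15 is a Wednesday; the Sunday before it is 2024-05-12.
print(previous_sunday(date(2024, 5, 15)))  # 2024-05-12
# A Sunday maps to itself.
print(previous_sunday(date(2024, 5, 12)))  # 2024-05-12
```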


SAS Date conversion from text YYYY/MM to DATETIME20. Format

Hi, I have a date stored as text in the format 'YYYY/MM' in one table, e.g. 2018/01, 2020/08, etc.
I need to join it with another table where the date is a numeric value (with the DATETIME20. format attached to it), converting both to months for the comparison.
Is there any way to do it in PROC SQL, as the rest of my query is in PROC SQL?
E.g. Table 1: Month = 2018/01; Table 2: Date = 20.01.2018 10:48:17. They should be joined in the PROC SQL query.
I would also like to calculate the difference in months between these two dates.
Thank you in advance.
Convert both to the same DATE value. To convert a datetime value to a date, use the DATEPART() function. To move to the first day of the month, use the INTNX() function with the month interval. To convert a string like '2018/01' to a date, append '/01' and use the INPUT() function with the YYMMDD informat.
proc sql;
  create table want as
    select *
    from table1, table2
    where input(cats(table1.month,'/01'),yymmdd10.)
          = intnx('month',datepart(table2.date),0)
  ;
quit;
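The normalization on each side of that WHERE clause, plus the month difference the question asks about, can be sketched outside SAS. This is a Python sketch; `month_diff` plays the role INTCK('month', …) would play in SAS:

```python
from datetime import datetime, date

def month_key(text_month: str) -> date:
    # '2018/01' + '/01' -> 2018-01-01, like input(cats(month,'/01'), yymmdd10.)
    return datetime.strptime(text_month + "/01", "%Y/%m/%d").date()

def datetime_to_month(dt: datetime) -> date:
    # DATEPART() then INTNX('month', ..., 0): keep the month, drop day and time.
    return dt.date().replace(day=1)

def month_diff(a: date, b: date) -> int:
    # Count of calendar-month boundaries crossed, like INTCK('month', a, b).
    return (b.year - a.year) * 12 + (b.month - a.month)

m1 = month_key("2018/01")
m2 = datetime_to_month(datetime(2018, 1, 20, 10, 48, 17))
print(m1 == m2)                                            # True: the rows join
print(month_diff(month_key("2018/01"), month_key("2020/08")))  # 31
```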

pyspark: convert a date from dd-MMM-yyyy hh:mm:ss format to yyyy-MM-dd

Could someone help me out with this?
I am trying to convert a date like 10-Jun-2018 09:59:51 to 2018-06-10 in Spark, but without success.
Basically I am querying a date field from an external table like:
select format_datetime(my_date, 'y-M-d') as dt from blahblah
The above works in Athena but not in pyspark.
I used the code below in pyspark but get an empty string back:
select from_unixtime(UNIX_TIMESTAMP(my_date, 'yyyy-MM-dd')) as dt from blahblah
Note: my_date is of string data type in the external table, and I need to extract the date part to create a partition on this field.
I appreciate any help in this regard!
Parse with the pattern the string actually has (dd-MMM-yyyy hh:mm:ss), then re-format; passing a pattern the string does not match is why your query returned an empty result. This works in pyspark and Hive:
Pyspark
sqlContext.sql("select from_unixtime(unix_timestamp('10-Jun-2018 09:59:51', 'dd-MMM-yyyy hh:mm:ss'), 'yyyy-MM-dd')").show()
HIVE
select from_unixtime(unix_timestamp('10-Jun-2018 09:59:51', 'dd-MMM-yyyy hh:mm:ss'), 'yyyy-MM-dd');
Refer to the Hive documentation for the builtin date/time UDFs, and to Java's SimpleDateFormat documentation for the timestamp formatting strings.
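The same parse-then-reformat idea can be checked quickly in plain Python. This is a sketch using the standard `datetime` module, whose codes differ from Java's SimpleDateFormat: `%b` corresponds to `MMM` and `%I` to the 12-hour `hh`:

```python
from datetime import datetime

raw = "10-Jun-2018 09:59:51"
# Parse with the pattern the string actually has, then re-format --
# the Python analogue of
# from_unixtime(unix_timestamp(raw, 'dd-MMM-yyyy hh:mm:ss'), 'yyyy-MM-dd').
parsed = datetime.strptime(raw, "%d-%b-%Y %I:%M:%S")
print(parsed.strftime("%Y-%m-%d"))  # 2018-06-10
```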

Postgresql date inconsistent format

I am building an app using Node with a Postgres backend. I have a date column whose data type is date. The dates in the database are stored in the format YYYY-MM-DD. However, when I query from my app, the date comes back with a time and time zone attached. For example:
2016-01-06T05:00:00.000Z
Does anyone know how I can prevent this?
Thanks in advance!
Use to_char to format the date on the server side; the driver then receives plain text rather than a date value it converts to a JavaScript Date:
SELECT to_char(dateColumn, 'YYYY-MM-DD') AS formattedDate FROM ...
Example:
ubuntu=> SELECT to_char(to_timestamp('2016-01-07', 'YYYY-MM-DD'), 'YYYY-MM-DD');
to_char
------------
2016-01-07
(1 row)
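Where the surprising time zone suffix comes from can be sketched in Python. This is an illustration, not the Node driver itself, and it assumes a client in US Eastern time (UTC-5): the driver promotes the bare DATE to midnight in the client's zone, and that instant serialized in UTC is the `T05:00:00.000Z` you saw:

```python
from datetime import date, datetime, timezone, timedelta

eastern = timezone(timedelta(hours=-5))  # assumed client offset (US Eastern)
# The DATE becomes midnight local time, then gets serialized in UTC.
midnight_local = datetime(2016, 1, 6, tzinfo=eastern)
print(midnight_local.astimezone(timezone.utc).isoformat())
# 2016-01-06T05:00:00+00:00  (the same instant the driver prints as ...Z)

# to_char side-steps this by sending plain text; in Python terms:
print(date(2016, 1, 6).strftime("%Y-%m-%d"))  # 2016-01-06
```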

DB2 VARCHAR_FORMAT works with a Timestamp but not a Date

I want to convert a Date column to a formatted string in DB2. This SQL works fine:
select varchar_format(current timestamp, 'YYYY-MM')
from sysibm.sysdummy1;
but this SQL gives an error:
select varchar_format(current date, 'YYYY-MM')
from sysibm.sysdummy1;
The error is:
[SQL0171] Argument 1 of function VARCHAR_FORMAT not valid.
In the first SQL, the first arg for VARCHAR_FORMAT is a timestamp, and that works. In the second SQL, the first arg for VARCHAR_FORMAT is a date, and that doesn't work.
The IBM doc implies that there's only this one function, VARCHAR_FORMAT (and its synonym, TO_CHAR).
How am I supposed to convert a DATE (not a TIMESTAMP) to a string? Or, do I have to convert the DATE to a TIMESTAMP first, then use VARCHAR_FORMAT?
I am running DB2 7.1 for i Series.
Update: converting to TIMESTAMP_ISO first works, though it's ugly:
select varchar_format(timestamp_iso(current date), 'YYYY-MM')
from sysibm.sysdummy1;
The documentation for the VARCHAR_FORMAT function in DB2 for i only mentions TIMESTAMP values, not DATE. Some DB2 platforms will implicitly cast a DATE value to a TIMESTAMP when the statement is calling a TIMESTAMP-only function or when comparing the DATE to a TIMESTAMP, but not all do.

how to insert a time in oracle 10g database

I want to insert a date and time into an Oracle database. I have created the table with these columns:
create table myadmin
( employe_id number(5),
supervisor Varchar2(20),
department Varchar2(20),
action Varchar2(20),
sdate date,
stime date)
Inserting the values below gives an error. Please tell me how to insert the time:
insert into myadmin
( employe_id,supervisor,department,action,sdate,stime) values
(83,'gaurav','helpdesk','pick','23-jan-2013','09:43:00');
You have to use the to_date function to insert a date in Oracle, like this:
to_date('23-01-2013','dd-mm-yyyy')
The general pattern is to_date('your date','your date format').
You can also combine date and time in a single value if you want, something like this:
to_date('23-01-2013 09:43:00','dd-mm-yyyy hh24:mi:ss')
A date in Oracle always has both a date part and a time part. Having date and time in two separate columns only makes sense if the date can be null while the time is not (and even then, you could instead set the date to an improbable sentinel like 1.1.0001).
However, if you want to stick with those two separate fields, turn each string into a date with the to_date function, specifying the format used:
insert into myadmin
( employe_id,supervisor,department,action,sdate,stime) values
(83,'gaurav','helpdesk','pick',to_date('23-01-2013','dd-mm-yyyy'), to_date('09:43:00', 'hh24:mi:ss'));
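Outside the database, the same to_date idea looks like this. A Python sketch for illustration; `%H` is the 24-hour analogue of Oracle's hh24:

```python
from datetime import datetime

# to_date('23-01-2013 09:43:00', 'dd-mm-yyyy hh24:mi:ss') in Python terms:
combined = datetime.strptime("23-01-2013 09:43:00", "%d-%m-%Y %H:%M:%S")
print(combined.isoformat(sep=" "))  # 2013-01-23 09:43:00

# The two-column variant parses each piece separately. A time-only value
# lands on Python's default date (1900-01-01), much as Oracle fills in the
# first day of the current month when the format mask has no date part.
stime = datetime.strptime("09:43:00", "%H:%M:%S")
print(stime.time())  # 09:43:00
```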