Why does DATE in a DB2 database have a time component in it?

How can I make a column's data type a plain DATE like YYYY-MM-DD?
When I create a table with the data type DATE, it becomes TIMESTAMP(0).
When I alter the column with SET DATA TYPE DATE, it is still TIMESTAMP(0), and
SELECT CHAR(CURRENT DATE, ISO) FROM SYSIBM.SYSDUMMY1;
fails with SQLCODE=-171. CURRENT DATE returns 2017-02-28 19:19:09.0, which is too long.
Database info: DB2 10.5 on Linux x64
CREATE TABLE "XCRSUSR"."TIMP_TASK_SERIAL" (
"SERIAL_NO" DECIMAL(16 , 0),
"TASK_NAME" VARCHAR(10),
"TASK_TYPE" DOUBLE,
"TASK_XML" CLOB(10) INLINE LENGTH 164,
"SEND_TIME" DATE,
"FINISH_TIME" DATE,
"TASK_STATUS" DOUBLE DEFAULT 0,
"RUN_TYPE" DOUBLE,
"FLAG" DOUBLE,
"TASK_ID" VARCHAR(10)
)
ORGANIZE BY ROW
DATA CAPTURE NONE
IN "CREDIT_U_16" INDEX IN "CREDIT_INDEX_16"
COMPRESS NO;
ALTER TABLE TIMP_TASK_SERIAL ALTER COLUMN SEND_TIME SET DATA TYPE DATE;
select CURRENT DATE from SYSIBM.SYSDUMMY1;
1
---------------------
2017-02-28 19:19:09.0

Check the settings of the Oracle compatibility vector (the DB2_COMPATIBILITY_VECTOR registry variable), described at
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.apdv.porting.doc/doc/r0052867.html
Bit position 7 in Table 1 is what you are looking for.
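If the instance is running with full Oracle compatibility (DB2_COMPATIBILITY_VECTOR=ORA), a hedged sketch of clearing just that bit follows; the hex values assume the mapping in the linked table, so verify against your version's documentation before applying:

```shell
# Show the current registry settings (ORA turns on all compatibility bits,
# including bit 7, which makes DATE behave as TIMESTAMP(0)).
db2set -all

# Illustrative value only: clear bit 7 (hex 40) while keeping the other
# Oracle-compatibility bits, e.g. FFF -> FBF. Adjust to your actual vector.
db2set DB2_COMPATIBILITY_VECTOR=FBF

# The variable is read at instance start, so restart the instance.
db2stop
db2start
```

Note that the compatibility features take effect for databases created while the vector is set; columns already created as TIMESTAMP(0) are not converted back by changing the setting.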


TIMESTAMP: creation_date::date between '2022-05-15' and '2022-06-15'

I just want to know the difference between these two queries:
select count(user_id) from tb_users
where creation_date::date between '2022-05-15' and '2022-06-15';
Result: 41,232
select count(user_id) from tb_users
where creation_date between '2022-05-15' and '2022-06-15';
Result: 40,130
As far as I can see it is related to the timestamp, but I do not understand the difference. Thank you!
Your column creation_date in the table is most probably of timestamp type, e.g. '2022-05-15 00:00:00'. By adding ::date you are casting the timestamp to a date: '2022-05-15'.
You can read more about casting data types here:
https://www.postgresqltutorial.com/postgresql-tutorial/postgresql-cast/
When you ask Postgres to implicitly coerce a DATE value to a TIMESTAMP value, the hours, minutes and seconds are set to zero.
In the first query, you explicitly cast the creation date to DATE, which is then compared to the provided DATE values.
In the second query, creation_date is of type TIMESTAMP, so PostgreSQL converts your DATE values to TIMESTAMP values and the comparison becomes
creation_date >= '2022-05-15 00:00:00' AND creation_date <= '2022-06-15 00:00:00'
This excludes every row created after midnight on 2022-06-15, which is why the second count is lower.
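A minimal Python sketch of why the counts differ (the sample rows are invented for illustration; the column and range come from the question):

```python
from datetime import date, datetime

# Invented sample rows: three users created on 2022-06-15 at various times.
rows = [
    datetime(2022, 6, 15, 0, 0, 0),    # exactly midnight
    datetime(2022, 6, 15, 9, 30, 0),
    datetime(2022, 6, 15, 23, 59, 59),
]

lo, hi = date(2022, 5, 15), date(2022, 6, 15)

# First query: creation_date::date BETWEEN lo AND hi
cast_to_date = sum(lo <= ts.date() <= hi for ts in rows)

# Second query: the DATE bounds are promoted to timestamps at midnight,
# so only rows at or before 2022-06-15 00:00:00 match.
lo_ts, hi_ts = datetime(2022, 5, 15), datetime(2022, 6, 15)
plain = sum(lo_ts <= ts <= hi_ts for ts in rows)

print(cast_to_date, plain)  # 3 rows match with the cast, only 1 without it
```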

UnixTime saved as int, how to query as Date. Cassandra CQLSH

I'm building the pipeline sensor data → MQTT broker → Kafka → Cassandra. The payload is transferred as JSON, and when it is saved in Cassandra the date is stored as an int. I can't get a readable date when querying in cqlsh.
CREATE TABLE sensordata.mqttsensordata (
sensor text,
temperature float,
humidity int,
timestamp int,
battery int,
calibratedhumidity int,
datetime timestamp,
receiver text,
rssi float,
voltage float,
PRIMARY KEY ((sensor, temperature, humidity), timestamp)
);
How do I get a readable timestamp when I query the database, like in the picture below?
(screenshot of the query output)
The only time function that can operate on a UNIX timestamp is [min|max]timeuuid (mintimeuuid() or maxtimeuuid()). You can use either of those on the timestamp column, and nest it inside the toTimestamp() function.
For example, if I have a table of sample times:
> SELECT * FROM sample_times WHERE a=1;
a | b | c | d
---+---------------------------------+--------------------------------------+---------------
1 | 2021-08-08 21:42:54.131000+0000 | 96594031-f891-11eb-b7bc-e12958c8479f | 1628458974131
Column d is a bigint where I have stored the UNIX timestamp. I can show that as a timestamp like this:
> SELECT totimestamp(mintimeuuid(d)) FROM sample_times WHERE a=1;
system.totimestamp(system.mintimeuuid(d))
-------------------------------------------
2021-08-08 21:42:54.131000+0000
(1 rows)
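What toTimestamp(mintimeuuid(d)) is doing is interpreting the bigint as milliseconds since the UNIX epoch; a sketch of the same conversion in plain Python, using the value from the rows above:

```python
from datetime import datetime, timezone

millis = 1628458974131  # the bigint stored in column d above

# Split into whole seconds and milliseconds to avoid float rounding.
secs, ms = divmod(millis, 1000)
dt = datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=ms * 1000)

print(dt.isoformat(timespec="milliseconds"))  # 2021-08-08T21:42:54.131+00:00
```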

Converting Integer values to Date in Presto SQL

Below is a script I am trying to run in Presto. I am subtracting an integer field, which I attempt to convert to a date, from today's date, to get the exact number of days between them. Unfortunately, the highlighted block does not always convert the date correctly, so my final answer is wrong. Does anyone know another way around this, or a standard Presto method for converting integer values to dates?
The integer value in the column is in the format '20191123', i.e. year-month-day.
select ms, activ_dt, current_date, date_diff('day',act_dt,current_date) from
(
select ms,activ_dt, **CAST(parse_datetime(CAST(activ_dt AS varchar), 'YYYYMMDD') AS date) as act_dt**, nov19
from h.A_Subs_1 where msisdn_key=23480320012
) limit 19
You can convert a "date as a number" (e.g. 20180527 for May 27, 2018) as follows:
cast to varchar
parse_datetime with an appropriate format
cast to date (since parse_datetime returns a timestamp)
Note that parse_datetime uses Joda-Time pattern letters, which are case-sensitive: lowercase 'dd' is day-of-month, while the uppercase 'DD' in your query means day-of-year, which is likely why your conversion was not always correct.
Example:
presto> SELECT CAST(parse_datetime(CAST(20180527 AS varchar), 'yyyyMMdd') AS date);
_col0
------------
2018-05-27
You can use the sample query below for your requirement:
select date_diff('day', date_parse('20191209', '%Y%m%d'), current_timestamp);
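The same number-to-date logic can be sanity-checked outside Presto; a small Python sketch using the literals from the answers above (the fixed reference date is only there to make the example deterministic):

```python
from datetime import date, datetime

raw = 20180527  # "date as a number", as in the example above
d = datetime.strptime(str(raw), "%Y%m%d").date()
print(d)  # 2018-05-27

# Day difference, mirroring date_diff('day', ...) with a fixed "today".
days = (date(2019, 12, 31) - datetime.strptime("20191209", "%Y%m%d").date()).days
print(days)  # 22
```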

Teradata : BTEQ Import Invalid Date Issue

I am trying to port data from a flat file to TD via BTEQ.
The table definition is :
CREATE MULTISET TABLE _module_execution_log
(
system_id INTEGER,
process_id INTEGER,
module_id INTEGER,
julian_dt INTEGER,
referral_dt DATE FORMAT 'YYYY-MM-DD',
start_dt_tm TIMESTAMP(6),
end_dt_tm TIMESTAMP(6),
ref_s_cnt INTEGER,
ref_d_cnt INTEGER)
PRIMARY INDEX ( module_id );
Following are two sample records that I am trying to load into the table:
1|1|30|2007073|Mar 14 2007 12:00:00:000AM|Mar 15 2007 1:27:00:000PM|Mar 15 2007 1:41:08:686PM|0|0
1|1|26|2007073|Mar 14 2007 12:00:00:000AM|Mar 15 2007 1:27:00:000PM|Mar 15 2007 1:59:40:620PM|0|0
A snippet from my BTEQ script:
USING
( system_id INTEGER
,process_id INTEGER
,module_id INTEGER
,julian_dt INTEGER
,referral_dt DATE FORMAT 'YYYY-MM-DD'
,start_dt_tm TIMESTAMP
,end_dt_tm TIMESTAMP
,ref_s_cnt INTEGER
,ref_d_cnt INTEGER
)
INSERT INTO _module_execution_log
( system_id
,process_id
,module_id
,julian_dt
,referral_dt
,start_dt_tm
,end_dt_tm
,ref_s_cnt
,ref_d_cnt
)
VALUES (
:system_id
,:process_id
,:module_id
,:julian_dt
,:referral_dt
,:start_dt_tm
,:end_dt_tm
,:ref_s_cnt
,:ref_d_cnt);
I get the following error during import:
*** Failure 2665 Invalid date.
Statement# 1, Info =5
*** Failure 2665 Invalid date.
Statement# 1, Info =5
The issue is surely with the exported date in the 5th column. I cannot modify the export query. I tried the following in BTEQ, but it still failed:
cast(cast(substr(:referral_dt,1,11) as date format 'MMMBDDBYYYY') as date format 'YYYY-MM-DD')
Your data is pipe-delimited variable-length characters, so the USING should match the input data, e.g.
system_id VARCHAR(11)
referral_dt VARCHAR(26)
The VARCHARs will be automatically cast to the target data types using a default format. For your timestamps you need to cast manually, adding a format:
referral_dt (TIMESTAMP(3), FORMAT 'mmmBddByyyyBhh:mi:ss.s(3)T')
But this will fail for a single-digit hour; Teradata always wants two digits.
If you're on TD14 or later, you are better off using the Oracle TO_DATE/TO_TIMESTAMP UDFs, which allow single-digit hours:
TO_TIMESTAMP(referral_dt,'MON DD YYYY HH:MI:SS:FF3AM')
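The sample value's format (single-digit hour, colon before the milliseconds, trailing AM/PM) can be verified outside Teradata; a Python sketch parsing the 5th field of one of the sample records:

```python
from datetime import datetime

raw = "Mar 15 2007 1:27:00:000PM"  # 5th-column value from the sample record

# %I accepts a single-digit hour, unlike Teradata's strict 'hh' format token;
# %f consumes the three millisecond digits before AM/PM.
dt = datetime.strptime(raw, "%b %d %Y %I:%M:%S:%f%p")
print(dt)  # 2007-03-15 13:27:00
```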
Your data does not include a date value.
The first four values expected are integers, then a date, then a timestamp:
system_id INTEGER,
process_id INTEGER,
module_id INTEGER,
julian_dt INTEGER,
**referral_dt DATE FORMAT 'YYYY-MM-DD'**,
start_dt_tm TIMESTAMP(6), ...
Your data doesn't match:
1|1|30|2007073|Mar 14 2007 12:00:00:000AM|Mar 15 2007 1:27:00:000PM|Mar 15 2007 1:41:08:686PM|0|0
you are missing the date:
1|1|30|2007073|**????-??-??**| Mar 14 2007 12:00:00:000AM|...

Avoiding default date value to be used when only time is provided to a datetime field on Redshift

I created a table with a datetime field "dt" and am using the COPY command to load data. The corresponding value for the field in the file is just the time of day, e.g. 14:50:00, so the value being stored is 1900-01-01 14:50:00. I don't need the date part. How can I avoid storing it?
Or maybe there is an alternate datatype which can store only the time.
Amazon Redshift supports only the date (year, month, day) and timestamp (year, month, day, hour, minute, second) types; it does not support PostgreSQL's time (hour, minute, second) type.
There are two ways to work around this.
As @Damien_The_Unbeliever mentioned, ignore the date part of the timestamp:
create table date_test (id int, timestamp timestamp);
insert into date_test values (1, '1900-01-01 14:50:00');
insert into date_test values (2, '1900-01-01 17:20:00');
select * from date_test where timestamp > '1900-01-01 14:50:00';
id | timestamp
----+---------------------
2 | 1900-01-01 17:20:00
(1 row)
Use char or varchar type to store the time value.
create table date_test2(id int, timestamp char(8));
insert into date_test2 values (1, '14:50:00');
insert into date_test2 values (2, '17:20:00');
select * from date_test2 where timestamp > '14:50:00';
id | timestamp
----+-----------
2 | 17:20:00
(1 row)
The second solution looks easier, but it performs worse, as the Redshift documentation says. If you store a large amount of data, you should consider the first one.
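One caveat with the char(8) approach: string comparison is only correct because the times are zero-padded to a fixed HH:MM:SS width. A small Python sketch of the pitfall:

```python
# Zero-padded times compare correctly as strings:
assert "17:20:00" > "14:50:00"

# But without the leading zero, lexicographic order is wrong:
assert "9:50:00" > "14:50:00"   # '9' > '1', even though 09:50 < 14:50

# Normalizing to HH:MM:SS restores the correct ordering:
assert "09:50:00" < "14:50:00"
```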
Here are the related links to the document about date/time column.
http://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-timestamp-date-columns.html
http://docs.aws.amazon.com/redshift/latest/dg/r_Datetime_types.html