DB2 seconds to time conversion issue - date

Problem: some rows contain dirty data that cannot be converted to a date, so the query fails.
In DB2 the date is stored as milliseconds since the Unix epoch.
While transferring the data to SQL Server we convert it to a datetime.
Query to convert to a datetime:
select TIMESTAMP('1970-01-01', '00:00:00') + (Startdate / 1000) SECONDS from tablename
Some rows contain dirty data that cannot be converted to a date; I need a query to find that bad data.
Desired query (pseudocode - an iserror-style predicate that flags rows failing the conversion):
select TIMESTAMP('1970-01-01', '00:00:00') +(Startdate/1000) SECONDS
from tablename
where iserror (TIMESTAMP('1970-01-01', '00:00:00') +(Startdate/1000) SECONDS) = 1

You may create a scalar function to suppress such an error:
--#SET TERMINATOR #
create or replace function ms2ts(p_milliseconds bigint)
returns timestamp
contains sql
deterministic
no external action
begin
  declare continue handler for sqlexception begin end;
  return TIMESTAMP('1970-01-01', '00:00:00') + (p_milliseconds / 1000) SECONDS;
end#
-- Usage:
select *
from
(
select ms2ts(startdate) ts, Startdate
from table(values
power(bigint(2), 45)
, power(bigint(2), 60)
) tablename (Startdate)
)
-- where ts is null
#
TS                     STARTDATE
---------------------  -------------------
3084-12-12 12:41:28.0  35184372088832
<null>                 1152921504606846976
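The same conversion, and the same error swallowing, can be sketched in Python to see why the second value fails: 2^45 milliseconds lands in the year 3084, while 2^60 milliseconds overflows any representable timestamp. This is an illustrative sketch of the logic, not DB2 itself:

```python
from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)

def ms2ts(p_milliseconds):
    """Convert epoch milliseconds to a timestamp; return None on overflow,
    mirroring the DB2 function's continue handler."""
    try:
        return EPOCH + timedelta(seconds=p_milliseconds // 1000)
    except OverflowError:
        return None

print(ms2ts(2**45))  # 3084-12-12 12:41:28
print(ms2ts(2**60))  # None
```

Returning None for unconvertible values plays the same role as the NULL in the DB2 result set above: it lets a `where ts is null` filter isolate the dirty rows.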

Postgresql function with values (from another table) as arguments

I can't figure out how to call a function with inputs specified from another table.
Let us assume the following function is being used to create a time interval:
create or replace function interval_generator(dt_start timestamp with TIME ZONE,
dt_end timestamp with TIME ZONE,
round_interval INTERVAL)
returns TABLE(time_start timestamp with TIME ZONE,
time_end timestamp with TIME ZONE) as $$
BEGIN
return query
SELECT
(n) time_start,
(n + round_interval) time_end
FROM generate_series(date_trunc('minute', dt_start), dt_end, round_interval) n;
END
$$
LANGUAGE 'plpgsql';
Let us create a dummy table for the minimal example:
DROP TABLE IF EXISTS lookup;
CREATE TEMP TABLE lookup
as
select *
from (
VALUES
('2017-08-17 04:00:00.000'::timestamp),
('2017-08-17 05:00:00.000'::timestamp),
('2017-08-18 06:00:00.000'::timestamp)
) as t (datetime);
Now my attempt is as follows:
select interval_generator(
SELECT datetime FROM lookup Order By datetime limit 1,
SELECT datetime FROM lookup Order By datetime Desc limit 1,
'1 hours'::interval
);
and it just yields the generic error ERROR: syntax error at or near "SELECT"
Enclose the SELECT statements in parentheses to make them scalar subquery expressions, like this:
select * from interval_generator(
(SELECT datetime FROM lookup Order By datetime limit 1),
(SELECT datetime FROM lookup Order By datetime Desc limit 1),
'1 hours'::interval
);
Please note that
SELECT datetime FROM lookup Order By datetime limit 1
is exactly
SELECT min(datetime) FROM lookup
which seems more readable to me. Since the function body of interval_generator consists of a single SQL query, why not make it a plain SQL function instead of PL/pgSQL?
<your-function-declaration> as $$
SELECT
(n) time_start,
(n + round_interval) time_end
FROM generate_series(date_trunc('minute', dt_start), dt_end, round_interval) n;
$$
LANGUAGE 'sql';
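To sanity-check the slot logic outside the database, here is a minimal Python sketch of what generate_series produces in this function (names mirror the SQL; this is an illustration, not the PostgreSQL implementation):

```python
from datetime import datetime, timedelta

def interval_generator(dt_start, dt_end, round_interval):
    """Yield (time_start, time_end) pairs, mirroring
    generate_series(date_trunc('minute', dt_start), dt_end, round_interval)."""
    n = dt_start.replace(second=0, microsecond=0)  # date_trunc('minute', ...)
    while n <= dt_end:  # generate_series is inclusive of the end point
        yield (n, n + round_interval)
        n += round_interval

slots = list(interval_generator(datetime(2017, 8, 17, 4, 0),
                                datetime(2017, 8, 17, 7, 0),
                                timedelta(hours=1)))
# four slots: 04:00-05:00, 05:00-06:00, 06:00-07:00, 07:00-08:00
```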

"ERROR: query has no destination for result data", when trying to return results in a temp table. Using Postgres 13.1

The code simply loops to load a row for every 15 minutes from the start time to the ending time, within one day. Once I get this working I will be adding more data into the temp table before it gets passed back to the caller.
I will be doing more processing later on by loading more data into that temp table; for now I just wanted to see if I could load the temp table and pass it back out of the function. I get the error in the title, ERROR: query has no destination for result data, on Postgres 13.1. Any help would be greatly appreciated. Thanks!
CREATE OR REPLACE FUNCTION uf_getAllProviderAppts(
in
in_loc_id bigint,
in_appt_str_dt date,
in_appt_str_tm char(8),
in_appt_end_dt date,
in_appt_end_tm char(8),
in_duration int,
in_provider bigint
)
RETURNS table( loc_id bigint, appt_str_dt date, appt_str_tm char(8), appt_end_dt date, appt_end_tm char, staff_id int, pat_name varchar, visit_reason varchar ) AS
$BODY$
DECLARE
appts tb_tmpOpenAppts%rowtype;
DECLARE
wk_str_tm time without time zone;
wk_end_tm time without time zone;
begin
CREATE TEMP TABLE IF NOT EXISTS tb_tmpOpenAppts
(
loc_id int8 not null,
appt_str_dt date not null,
appt_str_tm time without time zone,
appt_end_dt date not null,
appt_end_tm time without time zone,
staff_id int8 not null,
pat_name varchar(50) not null,
visit_reason varchar(128)
);
select cast( in_appt_str_tm as time) as wk_str_tm ;
select cast( in_appt_end_tm as time) as wk_end_tm ;
/* Loop creating open time slots for this Provider */
Loop
insert into tb_tmpOpenAppts(loc_id, appt_str_dt, appt_str_tm, appt_end_dt, appt_end_tm, staff_id, pat_name, visit_reason )
values(1, in_appt_str_dt, wk_str_tm, in_appt_end_dt, wk_end_tm, 4, 'Open', '');
set wk_str_tm = wk_end_tm ;
wk_end_tm = (select wk_end_tm + in_duration) ;
if wk_end_tm > cast(in_appt_end_dt as time) then
Exit;
end if;
End Loop ;
RETURN Query
select * from tb_tmpOpenAppts where loc_id > 0 ;
IF NOT FOUND THEN
RAISE EXCEPTION 'No Appointments at %.', $2;
END IF;
/* Loop to return all open appts - Testing only */
/*----------------------------------------------*/
/* LOOP
RETURN NEXT appts;
END LOOP;*/
RETURN;
END;
$BODY$
LANGUAGE plpgsql;
Following is the select I was using to test it with:
SELECT * FROM uf_getallproviderappts( 1, date('12-10-2020'), '08:00:00', date('12-10-2020'), '17:00:00', 15, 4 );
Also, I had to create a dummy table to get the thing to syntax check ok.
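The slot-building loop the function is aiming for can be sketched outside SQL. A minimal Python version of that intent (illustrative names, assuming the duration parameter is in minutes):

```python
from datetime import datetime, timedelta

def open_slots(str_tm, end_tm, duration_minutes):
    """Build (slot_start, slot_end) pairs from str_tm up to end_tm,
    mirroring the insert loop in uf_getAllProviderAppts."""
    slots = []
    slot_start = str_tm
    while True:
        slot_end = slot_start + timedelta(minutes=duration_minutes)
        if slot_end > end_tm:  # stop once a slot would run past closing time
            break
        slots.append((slot_start, slot_end))
        slot_start = slot_end  # next slot begins where this one ended
    return slots

day = datetime(2020, 12, 10)
slots = open_slots(day.replace(hour=8), day.replace(hour=17), 15)
# 36 fifteen-minute slots between 08:00 and 17:00
```

Note that, unlike the plpgsql draft above, this checks the end time before inserting a slot, so no slot ever extends past the requested end time.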

File is interpreted as a variable

I wrote and tested a pretty simple script in SQL Server. Now I need to get it working in Postgres (which I'm just learning), and I can't figure out the latest error: a table name in the script is being interpreted as a variable. Perhaps I'm not using DBeaver correctly (which I'm also trying to learn). Basically, when data is over 90 days old, the script moves it from transactions to archived_transactions and deletes the matching change_log records. The error and code are below:
SQL Error [42601]: ERROR: "archived_transactions_prerun" is not a known variable
Position: 325
CODE:
CREATE OR REPLACE FUNCTION Archiving ()
RETURNS void AS $$
declare
BEGIN
DROP TABLE IF EXISTS Archived_Transactions_PreRun;
DROP TABLE IF EXISTS Change_Log_PreRun;
DROP TABLE IF EXISTS Transactions_PreRun;
--COMMIT;
SELECT *
INTO Archived_Transactions_PreRun
FROM Archived_Transactions;
SELECT *
INTO Change_Log_PreRun
FROM Change_Log;
SELECT *
INTO Transactions_PreRun
FROM Transactions;
COMMIT;
-- Create reporting table entries
----------------------------------
DECLARE YYYY_MM_DD DATE = (SELECT CONVERT (DATE, GETDATE())) -- Run Date
, Report_Date DATE = (SELECT DATEADD (DAY, -90, GETDATE())) -- 90 Days ago
, To_Archive FLOAT
, Chg_Log FLOAT;
-- Count records to be archived
-------------------------------
SET Chg_Log = (SELECT COUNT(*) FROM Change_Log WHERE date_updated < Report_Date);
SET To_Archive = (SELECT COUNT(*) FROM transactions WHERE date < Report_Date);
-- If nothing to archive, exit
------------------------------
IF Chg_Log > 0
OR To_Archive > 0;
BEGIN
-- Remove 90+ records from change_log
-------------------------------------
DELETE
FROM change_log
WHERE date_updated < Report_Date;
-- Copy 90+ records to Archived_Transactions
--------------------------------------------
INSERT INTO Archived_Transactions
SELECT *
FROM Transactions
WHERE [date] < Report_Date;
-- Remove 90+ records from transactions
---------------------------------------
DELETE
FROM Transactions
WHERE date < Report_Date;
COMMIT;
END;
END;
$$
LANGUAGE 'plpgsql' VOLATILE
COST 100
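Independent of SQL dialect, the intended archiving flow is: compute a cutoff 90 days before the run date, move transactions older than the cutoff into the archive, and purge matching change-log rows. A minimal Python sketch of that flow, with in-memory lists standing in for the tables (illustrative only):

```python
from datetime import datetime, timedelta

def archive(transactions, change_log, archived, now=None):
    """Move transactions older than 90 days into `archived` and
    drop change_log entries older than the same cutoff."""
    cutoff = (now or datetime.now()) - timedelta(days=90)
    # Copy 90+ day records to the archive...
    archived.extend(t for t in transactions if t["date"] < cutoff)
    # ...then remove them from the live table and the change log.
    transactions[:] = [t for t in transactions if t["date"] >= cutoff]
    change_log[:] = [c for c in change_log if c["date_updated"] >= cutoff]

now = datetime(2021, 1, 1)
tx = [{"date": datetime(2020, 1, 1)}, {"date": datetime(2020, 12, 31)}]
log = [{"date_updated": datetime(2020, 1, 1)}]
arch = []
archive(tx, log, arch, now=now)
# the old transaction is archived, the recent one kept, the old log row purged
```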

variable date filter on large table too slow

I have a huge table with 25 million records and a system with a scheduled task that executes a query. The query needs to quickly pick up the latest records by the create-date (timestamp) column and apply some calculations. The problem is that the cutoff date is also kept in a table, and on each execution it is updated to the latest execution date. It does work, but it is very slow:
select * from request_history
where createdate > (select startdate from request_history_config)
limit 10;
it takes about 20 seconds to complete, which is ridiculously slow compared to this:
set custom.startDate = '2019-06-13T18:02:04';
select * from request_history
where createdate > current_setting('custom.startDate')::timestamp
limit 10;
and this query finishes well within 100 milliseconds. The problem with this is that I can't update and save the date for the next execution! I was looking for a SET variable TO statement that would let me grab a value from a table, but none of these attempts work:
select set_config('custom.startDate', startDate, false) from request_history_config;
// ERROR: function set_config(unknown, timestamp without time zone, boolean) does not exist
set custom.startDate to (select startDate from request_history_config);
// ERROR: syntax error at or near "("
You can do this using a function, like this:
CREATE OR REPLACE FUNCTION get_request_history()
RETURNS TABLE(createdate timestamp)
LANGUAGE plpgsql
AS $function$
DECLARE
start_date timestamp;
BEGIN
SELECT startdate INTO start_date FROM request_history_config;
RETURN QUERY
SELECT h.createdate
FROM request_history h
WHERE h.createdate > start_date
LIMIT 10;
END;
$function$;
And then use the function to get the values:
select * from get_request_history();

PostgreSQL stored procedure data parameter

I have the following stored procedure, which returns 0 results, but if I run the query by itself it returns a lot of results. What am I missing?
CREATE OR REPLACE FUNCTION countStatistics(baselineDate Date) RETURNS int AS $$
DECLARE
qty int;
BEGIN
SELECT COUNT(*) INTO qty FROM statistics WHERE time_stamp = baselineDate;
RETURN qty;
END;
$$ LANGUAGE plpgsql;
--Execute the function
SELECT countStatistics('2015-01-01 01:00:00') as qty;
returns 0
SELECT COUNT(*) FROM statistics WHERE time_stamp = '2015-01-01 01:00:00';
returns 100+ rows
You're declaring your baselineDate parameter as a date:
CREATE OR REPLACE FUNCTION countStatistics(baselineDate Date)
but using it as a timestamp:
SELECT COUNT(*) INTO qty FROM statistics WHERE time_stamp = baselineDate;
You're getting an implicit cast so countStatistics('2015-01-01 01:00:00') will actually execute this SQL:
SELECT COUNT(*) INTO qty FROM statistics WHERE time_stamp = '2015-01-01';
and, after the date is implicitly cast back to a timestamp, it will effectively be this:
SELECT COUNT(*) INTO qty FROM statistics WHERE time_stamp = '2015-01-01 00:00:00';
Try changing your function declaration to use a timestamp:
CREATE OR REPLACE FUNCTION countStatistics(baselineDate timestamp)
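The round trip through the date type can be reproduced in Python to see why the rows stop matching (a sketch of the same implicit cast, not PostgreSQL itself):

```python
from datetime import datetime, time

ts = datetime.fromisoformat("2015-01-01 01:00:00")

# Passing the timestamp to a date parameter drops the time portion...
as_date = ts.date()

# ...and casting back to a timestamp restores it at midnight.
back_to_ts = datetime.combine(as_date, time.min)

print(back_to_ts)        # 2015-01-01 00:00:00
print(back_to_ts == ts)  # False: rows stamped 01:00:00 no longer match
```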