do $$
declare
    tm1 timestamp without time zone;
    tm2 timestamp without time zone;
begin
    select localtimestamp(0) into tm1;
    for i in 1..200000000 loop
        null;  -- just busy-waiting for several seconds
    end loop;
    select localtimestamp(0) into tm2;
    raise notice '% ; %', tm1, tm2;
end;
$$ language plpgsql;
Why does this procedure give the same value for tm1 and tm2?
Isn't this code executed step by step?
From the manual
These SQL-standard functions all return values based on the start time of the current transaction [...] Since these functions return the start time of the current transaction, their values do not change during the transaction. This is considered a feature: the intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same time stamp
(Emphasis mine)
You probably want clock_timestamp() instead.
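For illustration, here is a sketch of the original block reworked with clock_timestamp(), which returns wall-clock time and therefore advances within a transaction (pg_sleep replaces the busy loop; the 2-second delay is arbitrary):

```sql
do $$
declare
    tm1 timestamptz;
    tm2 timestamptz;
begin
    tm1 := clock_timestamp();  -- wall-clock time, not transaction start
    perform pg_sleep(2);       -- simpler and more predictable than a busy loop
    tm2 := clock_timestamp();
    raise notice '% ; %', tm1, tm2;  -- the two values now differ
end;
$$ language plpgsql;
```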
Related
Is there anything similar to setTimeout / setInterval in PostgreSQL that allows a piece of code (a FUNCTION) to be executed at a specified time interval?
As far as I know, the only thing that can execute a FUNCTION in response to a certain event is a trigger, but triggers are operation-driven (INSERT / UPDATE / DELETE / TRUNCATE), not time-based.
I could do this in application code, but I'd prefer to delegate it to the database. Is there any way I could achieve this in PostgreSQL? Maybe with an extension?
Yes, there is a way to do this. It's called pg_sleep:
CREATE OR REPLACE FUNCTION my_function() RETURNS void AS $$
BEGIN
    LOOP
        PERFORM pg_sleep(1);
        RAISE NOTICE 'This is a notice!';
    END LOOP;
END;
$$ LANGUAGE plpgsql;
SELECT my_function();
This will raise the notice every second (note that the call never returns, so it ties up the session it runs in). You can also make it do other things instead of raising a notice.
OR
You can use PostgreSQL's background worker facility, but be aware that background workers are written in C and registered by an extension (see the worker_spi sample in the PostgreSQL source tree); there is no built-in SQL function for starting one. If you want to schedule a function purely from SQL, an extension such as pg_cron can do it. For example, assuming pg_cron is installed and configured, and using a simple function that prints a message:
CREATE OR REPLACE FUNCTION print_message() RETURNS void AS $$
BEGIN
    RAISE NOTICE 'Hello, world!';
END;
$$ LANGUAGE plpgsql;
-- pg_cron uses cron syntax; this runs the function once a minute
-- (newer pg_cron versions also accept an interval such as '5 seconds')
SELECT cron.schedule('print-message', '* * * * *', 'SELECT print_message()');
I just want to know if I can call a procedure automatically when something happens. In particular, I want to call a function that deletes records from a table when the now() timestamp is one month past the timestamp attribute of the record. Something like
create or replace procedure eliminate_when_time_past()
as $$
begin
    if (now() is one month past the record timestamp) then  -- pseudocode
        delete from preñadas where hierro = ...
    end if;
end;
$$
language plpgsql;
Is it possible?
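For what it's worth, the time-based condition in the sketch can be expressed as an ordinary WHERE clause; a minimal sketch, where fecha is an assumed name for the record's timestamp column (the question does not name it):

```sql
-- 'fecha' is an assumed column name for the record's timestamp
DELETE FROM preñadas
WHERE fecha < now() - interval '1 month';
```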
I need, in a PL/pgSQL script, to get the execution time of the last query into a variable for several purposes (calculation, display and storage), so the psql \timing option is not what I'm looking for, because I can't manipulate the resulting time. Do you know if there is anything like the GET DIAGNOSTICS command, but for execution time (or a workaround)?
get diagnostics my_var := EXECUTION_TIME;
I couldn't find anything other than row_count and result_oid...
You can compare clock_timestamp() before and after the query:
do $$
declare
    t timestamptz := clock_timestamp();
begin
    perform pg_sleep(random());
    raise notice 'time spent=%', clock_timestamp() - t;
end
$$ language plpgsql;
Sample result:
NOTICE: time spent=00:00:00.59173
I'd like to compare two functions in pgAdmin's SQL editor. Here's the script. However, when I run it, start_time and end_time seem to have the same value no matter how many iterations I use. (But the Query returned successfully with no result in nnn ms message does go up with every increase in the loop size.) Why?
DO
$$
DECLARE
    start_time timestamp;
    end_time timestamp;
    diff interval;
BEGIN
    SELECT now() INTO start_time;
    FOR i IN 1..1000 LOOP
        --PERFORM uuid_generate_v1mc();
        PERFORM id_generator();
    END LOOP;
    SELECT now() INTO end_time;
    SELECT end_time - start_time INTO diff;
    RAISE NOTICE '%', start_time;
    RAISE NOTICE '%', end_time;
    RAISE NOTICE '%', diff;
END
$$
From the manual
Since these functions [CURRENT_TIMESTAMP, CURRENT_TIME, CURRENT_DATE] return the start time of the current transaction, their values do not change during the transaction.
And then further down:
transaction_timestamp() is equivalent to CURRENT_TIMESTAMP, but is named to clearly reflect what it returns. statement_timestamp() returns the start time of the current statement (more specifically, the time of receipt of the latest command message from the client). statement_timestamp() and transaction_timestamp() return the same value during the first command of a transaction, but might differ during subsequent commands. clock_timestamp() returns the actual current time, and therefore its value changes even within a single SQL command
So you probably want to use clock_timestamp() instead.
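A sketch of the same benchmark rewritten with clock_timestamp() (this assumes the id_generator() function from the question exists):

```sql
DO
$$
DECLARE
    start_time timestamptz := clock_timestamp();
    end_time   timestamptz;
BEGIN
    FOR i IN 1..1000 LOOP
        PERFORM id_generator();  -- function under test, from the question
    END LOOP;
    end_time := clock_timestamp();  -- advances even within one statement
    RAISE NOTICE 'elapsed: %', end_time - start_time;
END
$$;
```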
Below is a function which I am running against two different tables that contain the same column names.
-- Function: test(character varying)
-- DROP FUNCTION test(character varying);
CREATE OR REPLACE FUNCTION test(table_name character varying)
  RETURNS SETOF void AS
$BODY$
DECLARE
    recordcount integer;
    j integer;
    hstoredata hstore;
BEGIN
    recordcount := getTableName(table_name);
    FOR j IN 1..recordcount LOOP
        RAISE NOTICE 'RECORD NUMBER IS: %', j;
        EXECUTE format('SELECT hstore(t) FROM datas.%I t WHERE id = $1', table_name) USING j INTO hstoredata;
        RAISE NOTICE 'hstoredata: %', hstoredata;
    END LOOP;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100
  ROWS 1000;
When the above function is run on a table containing 1000 rows, it takes around 536 ms.
When it is run on a table containing 10000 rows, it takes around 27994 ms.
Logically, the time taken for 10000 rows should be around 5360 ms, extrapolating from the 1000-row measurement, but the actual execution time is much higher.
Please suggest what changes to make in order to reduce the execution time.
Logically time taken for 10000 rows should be around 5360 ms based on
the calculation from 1000 rows, but the execution time is very high.
That assumes reading any particular row takes the same time as reading any other row, but this is not true.
For instance, if there's a text column in the table and it sometimes contains large contents, it's going to be fetched from TOAST storage (out of page) and dynamically uncompressed.
In order to reduce execution time, please suggest what changes to be
made.
To read all the table's rows without necessarily fetching them all into memory at once, you can use a cursor. That also avoids issuing a new query at every loop iteration. Cursors accept dynamic queries through EXECUTE.
See Cursors in plpgsql documentation.
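A minimal sketch of the cursor approach, assuming the same datas schema and table layout as in the question (test_cursor is a hypothetical name to avoid clashing with the original function):

```sql
CREATE OR REPLACE FUNCTION test_cursor(table_name varchar)
RETURNS void AS
$BODY$
DECLARE
    cur refcursor;
    rec record;
BEGIN
    -- one dynamic query for the whole table instead of one query per id
    OPEN cur FOR EXECUTE
        format('SELECT id, hstore(t) AS hs FROM datas.%I t ORDER BY id', table_name);
    LOOP
        FETCH cur INTO rec;
        EXIT WHEN NOT FOUND;
        RAISE NOTICE 'RECORD NUMBER IS: %', rec.id;
        RAISE NOTICE 'hstoredata: %', rec.hs;
    END LOOP;
    CLOSE cur;
END;
$BODY$
LANGUAGE plpgsql;
```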
As far as I can tell, you are overcomplicating things. Since recordcount is only used to generate the ID values, I think you can do everything with a single statement instead of querying each and every ID separately.
CREATE OR REPLACE FUNCTION test(table_name varchar)
  RETURNS void AS
$BODY$
DECLARE
    rec record;
BEGIN
    FOR rec IN EXECUTE format('SELECT id, hstore(t) AS hs FROM datas.%I', table_name) LOOP
        RAISE NOTICE 'RECORD NUMBER IS: %', rec.id;
        RAISE NOTICE 'hstoredata: %', rec.hs;
    END LOOP;
END;
$BODY$
LANGUAGE plpgsql;
The only way this differs from your solution is that if an ID smaller than the table's row count does not exist, you won't see a RECORD NUMBER message for it; conversely, you will see IDs that are bigger than the table's row count.
Any time you execute the same statement again and again in a loop, very loud alarm bells should ring in your head. SQL is optimized to deal with sets of data, not row-by-row processing (which is what your loop does).
You didn't tell us what real problem you are trying to solve (and I fear you have over-simplified your example), but given the code from the question, the above should be a much better solution (and definitely much faster).