What is a good way to track the timings of queries in PostgreSQL without looking in the log file?

I've got a function in a postgres database that does a lot of analysis; it consists of a succession of update and insert statements and eventually throws back some output. I'd like to figure out which statements execute slowly, without looking through the log files. (I'm much more comfortable with SQL than I am with, say, perl, to write date / time arithmetic queries in order to spot problems.)
I have a table, activity_log:
CREATE TABLE activity_log
(
    action character varying(250),
    action_date date,
    action_tune time without time zone
);
Then throughout my function, after each INSERT / UPDATE, I write statements like
INSERT INTO activity_log (action_date, action_tune, action)
VALUES (current_date, current_timestamp, 'INSERT to base_model');
So the function looks something like this:
CREATE FUNCTION rebucket(pos_control character varying, absolute_max_cpc numeric, absolute_max_bucket character varying)
    RETURNS integer AS
$BODY$
DECLARE qty INT;
BEGIN
    INSERT INTO activity_log (action_date, action_tune, action)
    VALUES (current_date, current_timestamp, 'Off we go');
    -- Do something that takes 5 minutes
    INSERT INTO activity_log (action_date, action_tune, action)
    VALUES (current_date, current_timestamp, 'INSERT to base_model');
    -- Then do something else that also takes about 5 minutes ...
    INSERT INTO activity_log (action_date, action_tune, action)
    VALUES (current_date, current_timestamp, 'INSERT to diagnostics');
    RETURN qty;
END
$BODY$
LANGUAGE plpgsql VOLATILE;
I've got away with this in other databases in the past, but when I try this approach in Postgres (9.1 on Windows 7), whenever I run the whole function the date and time in activity_log are exactly the same for every statement within the function: in the example above,
SELECT * FROM activity_log
gets me
Off we go 2013-05-13 12:33:23.386
INSERT to base_model 2013-05-13 12:33:23.386
INSERT to diagnostics 2013-05-13 12:33:23.386
(The function takes from 5 minutes to an hour to run, depending on what parameters we feed it, and it has upwards of 20 different statements within there, so it seems highly unlikely that every statement completed within the same 1/100th of a second.)
Why is that?

The timestamp you are using always gives the start of the current transaction. If you look in the manuals you will see that you want clock_timestamp().
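For example, a minimal sketch of the logging insert rewritten to use clock_timestamp(), using the same activity_log table as above:
-- clock_timestamp() returns the actual wall-clock time at the moment it is called,
-- so each logging statement gets its own timestamp even within a single transaction.
INSERT INTO activity_log (action_date, action_tune, action)
VALUES (current_date, clock_timestamp(), 'INSERT to base_model');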

Related

Postgresql Trigger / Transaction Behaviour

To aid me with debugging another issue in my database, I've written the following function as a trigger in postgresql:
CREATE OR REPLACE FUNCTION stage.triggerlogfunction()
    RETURNS trigger
    LANGUAGE plpgsql
AS $function$
BEGIN
    INSERT INTO stage.temptriggerlog_complete
    VALUES (current_timestamp, 'Hello');
    PERFORM pg_sleep(5);
    INSERT INTO stage.temptriggerlog_complete
    VALUES (current_timestamp, 'Did');
    PERFORM pg_sleep(5);
    INSERT INTO stage.temptriggerlog_complete
    VALUES (current_timestamp, 'This Work?');
    RETURN NULL;
END;
$function$;
When a new row is inserted into a table in my database, this trigger is set to fire. It does fire, but where I expected to see three rows with timestamps 5 seconds apart (running in sequence with a 5-second delay between each), the actual result was three rows all with the same timestamp.
Why is this happening? Is there a way I can force the behaviour that I was expecting?
is there any way I can get the current timestamp at the time each insert statement runs?
You're looking for statement_timestamp() (which would be the timestamp of the statement causing your trigger to fire) or even clock_timestamp() then.
See the documentation for what they do and the alternatives.
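As a minimal sketch based on the function in the question, only the timestamp call needs to change:
CREATE OR REPLACE FUNCTION stage.triggerlogfunction()
    RETURNS trigger
    LANGUAGE plpgsql
AS $function$
BEGIN
    -- clock_timestamp() advances during the transaction, unlike current_timestamp
    INSERT INTO stage.temptriggerlog_complete VALUES (clock_timestamp(), 'Hello');
    PERFORM pg_sleep(5);
    INSERT INTO stage.temptriggerlog_complete VALUES (clock_timestamp(), 'Did');
    PERFORM pg_sleep(5);
    INSERT INTO stage.temptriggerlog_complete VALUES (clock_timestamp(), 'This Work?');
    RETURN NULL;
END;
$function$;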

Does CLOCK_TIMESTAMP from a BEFORE trigger match log/commit order *exactly* in PG 12.3?

I've got a Postgres 12.3 question: Can I rely on CLOCK_TIMESTAMP() in a trigger to stamp an updated_dts timestamp in exactly the same order as changes are committed to the permanent data?
On the face of it, this might sound like kind of a silly question, but I just spent two days tracking down a super rare race condition in a non-Postgres system that hinged on exactly this behavior. (Lagging commits made their 'last value seen' tracking data unreliable.) Now I'm trying to figure out if it's possible for CLOCK_TIMESTAMP() to not match the order of changes recorded in the WAL perfectly.
It's simple to see how this could occur with NOW/TRANSACTION_TIMESTAMP/CURRENT_TIMESTAMP as they're returning the transaction start time, not the completion time. It's pretty easy, in that case, to record a timestamp sequence where the stamps and log order don't agree. But I can't figure out if there's any chance for commits to be saved in a different order to the BEFORE trigger CLOCK_TIMESTAMP() values.
For background, we need a 100% reliable timeline for an external search to use. As I understand it, I can create one using logical replication, and a replication-target side trigger to stamp changes as they're replayed from the log. What I'm unclear on, is if it's possible to get the same fidelity from CLOCK_TIMESTAMP() on a single server.
I haven't got the chops to get deep into the Postgres internals, and see how requests are interleaved, nor how granular execution is, and am hoping that someone here knows definitively. If this is more of a question for one of the PG mailing lists, please let me know.
-- Thanks
Below is a bit of sample code for how I'm looking at building the timestamps. It works fine, but doesn't prove anything about behavior with lots of concurrent processes.
---------------------------------------------
-- Create the trigger function
---------------------------------------------
DROP FUNCTION IF EXISTS api.set_updated CASCADE;
CREATE OR REPLACE FUNCTION api.set_updated()
    RETURNS TRIGGER
AS $BODY$
BEGIN
    NEW.updated_dts = CLOCK_TIMESTAMP();
    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;
COMMENT ON FUNCTION api.set_updated() IS 'Sets updated_dts field to CLOCK_TIMESTAMP(), if the record has changed.';
---------------------------------------------
-- Create the table
---------------------------------------------
DROP TABLE IF EXISTS api.numbers;
CREATE TABLE api.numbers (
    id uuid NOT NULL DEFAULT extensions.gen_random_uuid(),
    number integer NOT NULL DEFAULT NULL,
    updated_dts timestamptz NOT NULL DEFAULT 'epoch'::timestamptz
);
---------------------------------------------
-- Define the triggers (binding)
---------------------------------------------
-- NOTE: I'm guessing that in production I can use DEFAULT CLOCK_TIMESTAMP() instead of a BEFORE INSERT trigger.
-- I'm using a distinct DEFAULT value, as I want it to pop out if I'm not getting the trigger to fire.
CREATE TRIGGER trigger_api_number_before_insert
    BEFORE INSERT ON api.numbers
    FOR EACH ROW
    EXECUTE PROCEDURE set_updated();
CREATE TRIGGER trigger_api_number_before_update
    BEFORE UPDATE ON api.numbers
    FOR EACH ROW
    WHEN (OLD.* IS DISTINCT FROM NEW.*)
    EXECUTE PROCEDURE set_updated();
---------------------------------------------
-- INSERT some data
---------------------------------------------
INSERT INTO numbers (number) values (1),(2),(3);
---------------------------------------------
-- Take a look
---------------------------------------------
SELECT * from numbers ORDER BY updated_dts ASC; -- The values should be listed as 1, 2, 3 as oldest to newest.
---------------------------------------------
-- UPDATE a row
---------------------------------------------
UPDATE numbers SET number = 11 where number = 1;
---------------------------------------------
-- Take a look
---------------------------------------------
SELECT * from numbers ORDER BY updated_dts ASC; -- The values should be listed as 2, 3, 11 as oldest to newest.
No, you cannot depend on clock_timestamp() order during trigger execution (or while evaluating a DEFAULT clause) being the same as commit order.
Commit will always happen later than the function call, and you cannot control how long it takes between them.
But I am surprised that that is a problem for you. Typically, the commit time is not visible or relevant. Why don't you simply accept the clock_timestamp() as the measure of things?
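To illustrate how the two orders can diverge, here is a hypothetical two-session timeline (an illustration only, not a runnable script), using the api.numbers table from the question:
-- Session A                                  Session B
-- BEGIN;
-- INSERT INTO api.numbers (number)
--     VALUES (1);  -- BEFORE trigger stamps t1
--                                             BEGIN;
--                                             INSERT INTO api.numbers (number)
--                                                 VALUES (2);  -- trigger stamps t2 (t2 > t1)
--                                             COMMIT;          -- B commits (and reaches the WAL) first
-- COMMIT;  -- A commits second, but its row carries the earlier stamp t1
-- Commit/WAL order is B then A, while updated_dts order is A then B.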

postgres high CPU usage on after insert trigger

I have an application where I receive a stream of ticks (buys or sells of a commodity) and am trying to generate a table of minutely OHLC (open, high, low, close) columns with this data. The reason I am creating these in a table rather than deriving them from the tick table is due to the high volume of ticks I get (10000000 per day). Using this strategy I can delete all the ticks from the database on a schedule to keep my database size manageable.
My schema is roughly equivalent to this (unnecessary columns removed for brevity).
CREATE TABLE tick (
    executed TIMESTAMP WITH TIME ZONE NOT NULL,
    price NUMERIC
);
CREATE TABLE ohlc_minute (
    created TIMESTAMP WITH TIME ZONE NOT NULL PRIMARY KEY,
    open NUMERIC,
    high NUMERIC,
    low NUMERIC,
    close NUMERIC
);
My idea was to create an after insert trigger on tick which computes the last minute of OHLC and upserts this into the ohlc_minute table but with this trigger enabled the cpu usage on the database jumps to 100% almost instantly.
CREATE OR REPLACE FUNCTION update_ohlc()
    RETURNS trigger AS
$BODY$
BEGIN
    INSERT INTO ohlc_minute (created, open, high, low, close)
    SELECT
        date_trunc('minute', NEW.executed) executed,
        (array_agg(price ORDER BY executed ASC))[1] AS open,
        MAX(price) AS high,
        MIN(price) AS low,
        (array_agg(price ORDER BY executed DESC))[1] AS close
    FROM tick
    WHERE executed BETWEEN date_trunc('minute', NEW.executed) AND date_trunc('minute', NEW.executed) + interval '1 min'
    ON CONFLICT (created)
    DO UPDATE
        SET open = EXCLUDED.open, high = EXCLUDED.high, low = EXCLUDED.low, close = EXCLUDED.close;
    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;
CREATE TRIGGER tick_insert
AFTER INSERT
ON tick
FOR EACH ROW
EXECUTE PROCEDURE update_ohlc();
One possible alternative I have is just to run an equivalent function manually on a schedule to update all OHLC bars, but I like the idea of always having up-to-date partial (e.g. current bar less than one minute old) OHLC information available. Are there any easy optimisations I can make to lower the CPU usage of my trigger function?
Are ticks guaranteed to arrive in order? If the insert succeeds, then your aggregation would have been done over only one row, so the answer to all the aggregations is just the price. If the insert conflicts, then you should be able to compute each value based on just the existing row and the excluded one.
CREATE OR REPLACE FUNCTION update_ohlc()
    RETURNS trigger AS
$BODY$
BEGIN
    INSERT INTO ohlc_minute (created, open, high, low, close)
    VALUES (
        date_trunc('minute', NEW.executed),
        NEW.price,
        NEW.price,
        NEW.price,
        NEW.price
    )
    ON CONFLICT (created)
    DO UPDATE
        SET high = greatest(ohlc_minute.high, EXCLUDED.high),
            low = least(ohlc_minute.low, EXCLUDED.low),
            close = EXCLUDED.close;
    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;
If they are not guaranteed to arrive in order, then I think your current solution would be about optimal, if you insist on having partial results available within the accruing minute.
I solved my own problem: the answer was the obvious one, a missing index. I created an index
CREATE INDEX IF NOT EXISTS execute_index ON tick (executed);
and CPU usage has fallen to an acceptable level. I would, however, still be interested to see optimised solutions.

postgres trigger function only inserts few records in another table

I have two tables, hourly and daily, and my aim is to calculate the average of values from the hourly table and save it to the daily table. I have written a trigger function like this -
CREATE OR REPLACE FUNCTION public.calculate_daily_avg()
    RETURNS trigger AS
$BODY$
DECLARE chrly CURSOR FOR
    SELECT device, date(datum) datum, avg(cpu_util) cpu_util
    FROM chourly WHERE date(datum) = current_date GROUP BY device, date(datum);
BEGIN
    FOR chrly_rec IN chrly
    LOOP
        INSERT INTO cdaily (device, datum, cpu_util)
        VALUES (chrly_rec.device, chrly_rec.datum, chrly_rec.cpu_util)
        ON CONFLICT (device, datum) DO UPDATE SET
            device = chrly_rec.device, datum = chrly_rec.datum, cpu_util = chrly_rec.cpu_util;
        RETURN NEW;
    END LOOP;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        RAISE NOTICE 'NO DATA IN chourly FOR %', current_date;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION public.calculate_daily_avg()
    OWNER TO postgres;
ALTER FUNCTION public.calculate_daily_avg()
OWNER TO postgres;
and a trigger on the hourly table like this -
CREATE TRIGGER calculate_daily_avg_trg
BEFORE INSERT OR UPDATE
ON public.chourly
FOR EACH ROW
EXECUTE PROCEDURE public.calculate_daily_avg();
But when I try to insert or update about 3000 records in the hourly table, only 3 or 4 devices are inserted, not 3000. (In the trigger I have also tried AFTER INSERT OR UPDATE, but even that gives the same result.) What am I doing wrong here? Please suggest a better way to write the trigger if you feel I have written it wrongly. Thanks!
I don't suggest using a TRIGGER for this calculation on INSERT. Try a different approach: a function executed by cron hourly or daily.
WHY?
Because every time you INSERT one row, Postgres will run the AVG() aggregate and the loop (based on your flow).
That means another INSERT statement will wait until the previous INSERT has committed, which will hurt your database performance under a high volume of INSERTs. If you somehow manage to break that rule (perhaps through configuration), you will get inconsistent data, like what is happening to you right now.
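A minimal sketch of that scheduled approach, reusing the cursor query from the question (it assumes cdaily has a unique constraint on (device, datum), which ON CONFLICT requires):
-- Run once per hour/day from cron (e.g. via psql) instead of once per inserted row:
INSERT INTO cdaily (device, datum, cpu_util)
SELECT device, date(datum) AS datum, avg(cpu_util) AS cpu_util
FROM chourly
WHERE date(datum) = current_date
GROUP BY device, date(datum)
ON CONFLICT (device, datum)
DO UPDATE SET cpu_util = EXCLUDED.cpu_util;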

PostgreSQL, triggers, and concurrency to enforce a temporal key

I want to define a trigger in PostgreSQL to check that the inserted row, on a generic table, has the property: "no other row exists with the same key in the same valid time" (the keys are sequenced keys). In fact, I have already implemented it. But since the trigger has to scan the entire table, now I'm wondering: is there a need for a table-level lock? Or is this managed somehow by PostgreSQL itself?
Here is an example.
In the upcoming PostgreSQL 9.0 I would have defined the table in this way:
CREATE TABLE medicinal_products
(
    aic_code CHAR(9), -- sequenced key
    full_name VARCHAR(255),
    market_time PERIOD,
    EXCLUDE USING gist
        (aic_code CHECK WITH =,
         market_time CHECK WITH &&)
);
but in fact I have defined it like this:
CREATE TABLE medicinal_products
(
    PRIMARY KEY (aic_code, vs),
    aic_code CHAR(9), -- sequenced key
    full_name VARCHAR(255),
    vs DATE NOT NULL,
    ve DATE,
    CONSTRAINT valid_time_range
        CHECK (ve > vs OR ve IS NULL)
);
Then I have written a trigger that checks the constraint: "two distinct medicinal products can have the same code in two different periods, but not at the same time".
So the code:
INSERT INTO medicinal_products VALUES ('1','A','2010-01-01','2010-04-01');
INSERT INTO medicinal_products VALUES ('1','A','2010-03-01','2010-06-01');
returns an error.
One solution is to have a second table to use for detecting clashes, and populate that with a trigger. Using the schema you added into the question:
CREATE TABLE medicinal_product_date_map(
    aic_code char(9) NOT NULL,
    applicable_date date NOT NULL,
    UNIQUE(aic_code, applicable_date));
(Note: this is the second attempt, due to misreading your requirement the first time round. Hope it's right this time.)
Some functions to maintain this table:
CREATE FUNCTION add_medicinal_product_date_range(aic_code_in char(9), start_date date, end_date date)
    RETURNS void STRICT VOLATILE LANGUAGE sql AS $$
    INSERT INTO medicinal_product_date_map
    SELECT $1, $2 + d  -- one row per day in the range (OFFSET is a reserved word, so alias the series value as d)
    FROM generate_series(0, $3 - $2) AS d
$$;
CREATE FUNCTION clr_medicinal_product_date_range(aic_code_in char(9), start_date date, end_date date)
    RETURNS void STRICT VOLATILE LANGUAGE sql AS $$
    DELETE FROM medicinal_product_date_map
    WHERE aic_code = $1 AND applicable_date BETWEEN $2 AND $3
$$;
And populate the table first time with:
SELECT count(add_medicinal_product_date_range(aic_code, vs, ve))
FROM medicinal_products;
Now create triggers to populate the date map after changes to medicinal_products: after insert calls add_, after update calls clr_ (old values) and add_ (new values), after delete calls clr_.
CREATE FUNCTION sync_medicinal_product_date_map()
    RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    IF TG_OP = 'UPDATE' OR TG_OP = 'DELETE' THEN
        PERFORM clr_medicinal_product_date_range(OLD.aic_code, OLD.vs, OLD.ve);
    END IF;
    IF TG_OP = 'UPDATE' OR TG_OP = 'INSERT' THEN
        PERFORM add_medicinal_product_date_range(NEW.aic_code, NEW.vs, NEW.ve);
    END IF;
    RETURN NULL;
END;
$$;
CREATE TRIGGER sync_date_map
    AFTER INSERT OR UPDATE OR DELETE ON medicinal_products
    FOR EACH ROW EXECUTE PROCEDURE sync_medicinal_product_date_map();
The uniqueness constraint on medicinal_product_date_map will trap any products being added with the same code on the same day:
steve@steve[local] =# INSERT INTO medicinal_products VALUES ('1','A','2010-01-01','2010-04-01');
INSERT 0 1
steve@steve[local] =# INSERT INTO medicinal_products VALUES ('1','A','2010-03-01','2010-06-01');
ERROR:  duplicate key value violates unique constraint "medicinal_product_date_map_aic_code_applicable_date_key"
DETAIL:  Key (aic_code, applicable_date)=(1 , 2010-03-01) already exists.
CONTEXT:  SQL function "add_medicinal_product_date_range" statement 1
SQL statement "SELECT add_medicinal_product_date_range(NEW.aic_code, NEW.vs, NEW.ve)"
PL/pgSQL function "sync_medicinal_product_date_map" line 6 at PERFORM
This depends on the values being checked having a discrete space, which is why I asked about dates vs timestamps. Although timestamps are, technically, discrete since PostgreSQL only stores microsecond resolution, adding an entry to the map table for every microsecond the product is applicable for is not practical.
Having said that, you could probably also get away with something better than a full-table scan to check for overlapping timestamp intervals, with some trickery on looking for only the first interval not after or not before... however, for easy discrete spaces I prefer this approach which IME can also be handy for other things too (e.g. reports that need to quickly find which products are applicable on a certain day).
I also like this approach because it feels right to leverage the database's uniqueness-constraint mechanism this way. Also, I feel it will be more reliable in the context of concurrent updates to the master table: without locking the table against concurrent updates, it would be possible for a validation trigger to see no conflict and allow inserts in two concurrent sessions, that are then seen to conflict when both transaction's effects are visible.
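For what it's worth, a rough sketch of the overlap test hinted at above (an assumption of how such a check might look inside a BEFORE INSERT trigger on medicinal_products; on its own it is still subject to the concurrency caveat described in the previous paragraph):
SELECT 1
FROM medicinal_products m
WHERE m.aic_code = NEW.aic_code
  AND m.vs < coalesce(NEW.ve, 'infinity'::date)   -- existing row starts before the new one ends
  AND coalesce(m.ve, 'infinity'::date) > NEW.vs   -- and ends after the new one starts
LIMIT 1;  -- any row returned means the new period overlaps an existing one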
Just a thought, in case the valid time blocks could be coded with a number or something, creating a UNIQUE index on Id+TimeBlock would be blazingly fast and resolve all table lock problems.
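A hypothetical sketch of that idea, assuming the valid time could be encoded as an integer time_block column (which the schema above does not have):
CREATE UNIQUE INDEX medicinal_products_code_block_idx
    ON medicinal_products (aic_code, time_block);  -- time_block is a hypothetical integer column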
It is managed by PostgreSQL itself. On a SELECT it acquires an ACCESS SHARE lock, which means that you can query the table but not perform updates.
A radical solution which might help you is to use a cache like Ehcache or memcached to store the id/timeblock info and not use PostgreSQL at all. Many caches can be persisted so they would survive a server restart, and they do not exhibit this locking behavior.
Why can't you use a UNIQUE constraint? It will be much faster (it's an index) and easier.