Running a PostgreSQL query at a specific time

Scenario
I have a table which contains tasks that need to be completed by a specific datetime. If a task is not completed by that datetime (plus or minus a variable interval), a script runs to 'escalate' it.
This variable interval can be as small as 2 seconds or as large as 2 years.
Thoughts so far
Running a cron job every second, via pg_cron or similar, would technically let me check the database every second, but that means a lot of wasted processing and database overhead, and I'd rather not do this if possible.
Triggers can be fired on row insert/update/delete, so the worst-case scenario is having an external script watch for these triggers being fired.
Question
Is there a way to schedule a query to run at a specific time, ideally within PostgreSQL itself rather than via a bash/cron script? I.e.:
at 2017-09-30 09:32:00 - SELECT * FROM table WHERE datetime <= now()
Edit
As it came up in the comments, pgAgent is a possibility. The scenario would be:
The task is created by the user in the application and its due date is set (e.g. 2017-09-28 13:00:00). The user has an interval before/after this due date at which the task is escalated (e.g. one hour before), so at 12:00:00 on 2017-09-28 I want pgAgent (or another option) to run my SQL script that does the escalation.
The script that does the escalation is already written and works, and the date and time at which it should run are already calculated by another script.
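For what it's worth, a minimal sketch of a one-shot schedule using pg_cron (assuming a pg_cron version that supports named jobs; the job name and the escalate_task() function are hypothetical):

SELECT cron.schedule(
    'escalate-task-42',                               -- hypothetical job name
    '32 9 30 9 *',                                    -- 09:32 on 30 September
    $$ SELECT escalate_task(42);                      -- hypothetical escalation script
       SELECT cron.unschedule('escalate-task-42'); $$ -- one-shot: remove the job after it fires
);

Without the unschedule call the cron expression would fire again every year; pgAgent can achieve something similar with a job whose schedule is bounded to a single occurrence.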

Related

Is there a way to run the DataStage jobs based on different dates?

I have a table containing the dates for the ETL jobs to be run.
I do know that the schedule function in DataStage Director is able to schedule jobs to run on a specific date or recur weekly/monthly. However, in my case, the date will change.
For example, Job A needs to run every mid-February, mid-May, and mid-August.
Is there any way I can achieve this?
One option could be a DataStage sequence that runs regularly (e.g. daily), checking whether one of your run dates has been reached. This can be checked within the sequence and, if the condition is fulfilled, the job is run.
If you choose to try it, you need a job which selects from the dates table - you could compare the stored date with the current date directly in the SQL, as sketched below, and then send the result to a file or any other flag. Read the file within the sequence and, if your check condition is true, run whatever job you need to run.
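A minimal sketch of that SQL check, assuming a hypothetical etl_run_dates table (adjust to your database's dialect):

SELECT 'RUN' AS flag
FROM etl_run_dates              -- hypothetical table of scheduled run dates
WHERE run_date = CURRENT_DATE;  -- returns a row only on a scheduled day

The job writes whatever this returns to the flag file that the sequence then inspects.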

PostgreSQL: how to simulate long running query [duplicate]

To test my application I need a query to run for a long time (at least a couple of minutes). Any ideas on how to create this quickly?
The application needs to query the catalog to see a list of running queries.
My application uses postgresql. I am fine with creating additional dummy tables if required.
This will run for 5 minutes:
select pg_sleep(5 * 60);
The parameter for pg_sleep() is the duration in seconds.
You can also sleep until a specific timestamp using pg_sleep_until().
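For example, to block until a specific (illustrative) timestamp:
select pg_sleep_until('2017-09-30 09:32:00');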
More details in the manual:
http://www.postgresql.org/docs/9.5/static/functions-datetime.html#FUNCTIONS-DATETIME-DELAY

postgresql triggers: How to get a trigger when table is not getting new data

I have a remote program which inserts a new row every second into a table of a PostgreSQL DB.
Sometimes the program stops inserting new rows due to a WiFi problem. In that case, is there a way I can get a notification when no new row has been added in the last 10 seconds?
Currently I run a cron job every second which checks the most recent id in the table. If the most recent id does not change for 10 seconds, I create a notification.
Actually, I think the cron job is your best bet.
There is no AFTER NOTHING HAPPENS in the CREATE TRIGGER syntax ;)
The other option you have is to move the job into the database using pg_cron or a background worker. But I really think either of those options (especially the second one) complicates things for no gain.
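That said, a minimal sketch of the check the cron job can run (table and column names are hypothetical):

SELECT max(inserted_at) < now() - interval '10 seconds' AS is_stale
FROM readings;  -- hypothetical table fed by the remote program

If is_stale comes back true (or NULL, for an empty table), fire the notification.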

Postgres: Count all INSERT queries executed in the past 1 minute

I can count the currently active INSERT queries on the PostgreSQL server like this:
SELECT count(*) FROM pg_stat_activity where query like 'INSERT%'
But is there a way to count all INSERT queries executed on the server in a given period of time, e.g. in the past minute?
I have a bunch of tables into which I send a lot of inserts, and I would like to somehow aggregate how many rows I am inserting per minute. I could code a solution for this, but it would be so much easier if this could be extracted directly from the server.
Any stats like this over a certain period of time would be very helpful: the average time a query takes to process, the bandwidth that goes through per minute, etc.
Note: I am using PostgreSQL 12
If not already done, install the pg_stat_statements extension and take snapshots of the pg_stat_statements view: the diff will give the number of queries executed between two snapshots.
Note: it doesn't save each individual query; rather, it parameterizes them and then saves the aggregated result.
See https://www.citusdata.com/blog/2019/02/08/the-most-useful-postgres-extension-pg-stat-statements/
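A minimal sketch of the snapshot approach (the insert_snapshots table is hypothetical; pg_stat_statements must be in shared_preload_libraries):

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

CREATE TABLE insert_snapshots (
    taken_at     timestamptz DEFAULT now(),
    insert_calls bigint
);

-- run periodically, e.g. once a minute via cron or pg_cron
INSERT INTO insert_snapshots (insert_calls)
SELECT coalesce(sum(calls), 0)
FROM pg_stat_statements
WHERE query ILIKE 'insert%';

-- the diff between the two most recent snapshots
SELECT max(insert_calls) - min(insert_calls) AS inserts_last_interval
FROM (SELECT insert_calls
      FROM insert_snapshots
      ORDER BY taken_at DESC
      LIMIT 2) s;

Summing the rows column instead of calls would approximate rows inserted rather than statements executed.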
I believe that you can use an audit trigger.
The audit creates a table that registers INSERT, UPDATE and DELETE actions, so you can adapt it: every time the database runs one of those commands, the audit table registers the action, the table, and the time of the action. It is then easy to do a COUNT() on the audit table with a WHERE clause covering the last minute.
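A minimal sketch of such an audit setup (all names are hypothetical; EXECUTE FUNCTION needs PostgreSQL 11+, fine on 12):

CREATE TABLE audit_log (
    action     text,
    table_name text,
    logged_at  timestamptz DEFAULT now()
);

CREATE FUNCTION log_action() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (action, table_name) VALUES (TG_OP, TG_TABLE_NAME);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_audit
AFTER INSERT ON my_table        -- hypothetical target table; repeat per table
FOR EACH ROW EXECUTE FUNCTION log_action();

-- inserts in the past minute
SELECT count(*)
FROM audit_log
WHERE action = 'INSERT'
  AND logged_at > now() - interval '1 minute';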
I couldn't come across anything solid, so I created a table where I log the number of insert transactions, using a script that runs as a cron job. It was simple enough to implement, and instead of estimations I get the real values: I actually count all new rows inserted into the tables in a given interval.

DB2: How to timely delete records

I have a table in a DB2 database that has a few columns, one of which is L_TIMESTAMP. The need is to delete records where the difference between L_TIMESTAMP and CURRENT TIMESTAMP is greater than 5 minutes. This check needs to happen every hour. Please let me know if there is an approach to accomplish this at the DB end rather than scheduling a cron job at the app-server end.
The administrative task scheduler in DB2 would be a good way to accomplish this. You need to wrap the DELETE statement in a stored procedure, then submit it to the scheduler. The syntax for defining the schedule is based on cron, but it is all handled inside DB2.
http://pic.dhe.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=%2Fcom.ibm.db2.luw.admin.gui.doc%2Fdoc%2Fc0054380.html
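A minimal sketch (procedure, schema and table names are hypothetical; the scheduler must be enabled via DB2_ATS_ENABLE, and the exact ADMIN_TASK_ADD signature should be checked for your DB2 version):

CREATE PROCEDURE MYSCHEMA.PURGE_OLD_RECORDS()
LANGUAGE SQL
BEGIN
    DELETE FROM MYSCHEMA.MYTABLE
    WHERE L_TIMESTAMP < CURRENT TIMESTAMP - 5 MINUTES;
END

CALL SYSPROC.ADMIN_TASK_ADD(
    'PURGE_OLD_RECORDS_HOURLY',   -- task name
    NULL, NULL, NULL,             -- begin, end, max invocations: unrestricted
    '0 * * * *',                  -- cron-style schedule: top of every hour
    'MYSCHEMA',                   -- procedure schema
    'PURGE_OLD_RECORDS',          -- procedure name
    NULL, NULL, NULL              -- procedure input, options, remarks
);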