How to run a Postgres pg_cron Job AFTER another Job?

I am running some automated tasks on my Postgres database at night using the pg_cron extension, moving certain old records to archive tables. I run 5 stored procedures concurrently on 5 different background workers, so they all start at the same time and run on different workers (I assume this is similar to running different tasks on different threads in Java). These 5 stored procedures are independent (each moves records to archive tables), so they can run at the same time. I schedule each of them with a command like:
SELECT cron.schedule('myJob1',
  '* * * * *',
  'call my_stored_proc_1()'
);
SELECT cron.schedule('myJob2',
  '* * * * *',
  'call my_stored_proc_2()'
);
...
SELECT cron.schedule('myJob5',
  '* * * * *',
  'call my_stored_proc_5()'
);
NOW, I have some MORE dependent stored procedures that I want to run. But they need to run AFTER these 5 jobs finish/complete, because they perform some DELETE ... SQL operations.
How can I have this second stored procedure job (the one doing the DELETE queries) run AFTER my first 5 stored procedure jobs are DONE? I don't want to set a CRON expression for the second stored procedure doing the DELETEs, because I don't know what time the first 5 stored procs are even going to finish...
Below I included a little schematic of how the jobs are currently triggered and how I want it to work (if possible):

Preface: how I understand the problem
I hope that I understand the problem described by the OP.
If I am wrong, then everything below is invalid.
I suppose it's about periodic night tasks that are heavy in CPU and/or IO.
E.g.:
there are tasks A-C for archiving data
maybe tasks D-E for rebuilding aggregates / refreshing mat views
and finally task F that runs reindexing/analyze on the whole DB
So it makes sense to run task F only after tasks A-E have finished.
Every task needs to run just once in a period of time:
once a day, hour, or week, or only on weekends at night
it's better not to run them while the server is under load
Whether this fits the OP's requirement, I don't know.
For the sake of simplicity, let's presume that each task runs only once a night. It's easy to extend this to other periods/requirements.
Data-driven approach
1. Add a log table
E.g.:
CREATE TABLE job_log (
    log_id bigint,
    job_name text,
    log_date timestamptz
);
Tasks A-E
On start
At the start of each job function, do a check like:
IF EXISTS(
    SELECT 1 FROM job_log
    WHERE
        job_name = 'TaskA'                          -- TaskB-TaskE for each function
        AND log_date::DATE = NOW()::DATE            -- check that the function already executed tonight
) OR EXISTS(
    SELECT 1 FROM pg_stat_activity
    WHERE
        query LIKE 'SELECT * FROM jobA_function();' -- check that the job is not executing right now
        AND pid <> pg_backend_pid()                 -- ignore this session's own query
) THEN
    RETURN;
END IF;
Other conditions could be added as well: look at the number of connections, the existence of locks, and so on.
This way it is guaranteed that the function will not execute more often than needed.
On finish
INSERT INTO job_log
SELECT
    (SELECT MAX(log_id) FROM job_log) + 1  -- or use sequences/other autoincrements
    , 'TaskA'
    , NOW();
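Assembled into one piece, a job wrapper could look like this (a minimal sketch; taskA_job is a hypothetical name, the duplicate-run checks mirror the snippets above, and COALESCE just makes the first insert on an empty table work):
CREATE OR REPLACE PROCEDURE taskA_job()
LANGUAGE plpgsql AS $$
BEGIN
    -- Exit if TaskA already completed tonight...
    IF EXISTS (SELECT 1 FROM job_log
               WHERE job_name = 'TaskA'
                 AND log_date::DATE = NOW()::DATE)
       -- ...or another session is running it right now.
       OR EXISTS (SELECT 1 FROM pg_stat_activity
                  WHERE query LIKE '%taskA_job()%'
                    AND pid <> pg_backend_pid())  -- ignore this session's own query
    THEN
        RETURN;
    END IF;

    -- ... the actual archiving work goes here ...

    -- Log completion so later attempts tonight become no-ops.
    INSERT INTO job_log (log_id, job_name, log_date)
    SELECT COALESCE(MAX(log_id), 0) + 1, 'TaskA', NOW()
    FROM job_log;
END $$;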
Cronjob schedule
Its meaning becomes different.
Now it's: "try to initiate execution of the task".
It's safe to schedule it for every hour within a chosen period, or even more frequently; see the example below.
The cron job cannot know whether the server is under load, whether there are locks on a table, or whether somebody started the task manually.
The job function can be smarter about that.
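With pg_cron, that might look like the following (the job name and cron expression are just an illustration):
-- Attempt TaskA every hour between midnight and 6 a.m.;
-- the procedure itself decides whether there is anything to do.
SELECT cron.schedule('taskA-attempt', '0 0-6 * * *', 'CALL taskA_job()');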
Task F
Same as above, but the check on start also looks for the completion of the other tasks.
E.g.:
IF NOT EXISTS(
    SELECT 1 FROM job_log
    WHERE
        job_name = 'TaskA'
        AND log_date::DATE = NOW()::DATE
) OR NOT EXISTS(
    SELECT 1 FROM job_log
    WHERE
        job_name = 'TaskB'
        AND log_date::DATE = NOW()::DATE
)
....                                                -- checks for completion of the other tasks
OR EXISTS(
    SELECT 1 FROM job_log
    WHERE
        job_name = 'TaskF'
        AND log_date::DATE = NOW()::DATE            -- check that task F already executed tonight
) OR EXISTS(
    SELECT 1 FROM pg_stat_activity
    WHERE
        query LIKE 'SELECT * FROM jobF_function();' -- check that the job is not executing right now
        AND pid <> pg_backend_pid()                 -- ignore this session's own query
) THEN
    RETURN;
END IF;
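The per-task NOT EXISTS checks can also be collapsed into a single count over the prerequisite jobs; a sketch of the same idea:
-- Exit unless all five prerequisite tasks have logged completion tonight.
IF (SELECT COUNT(DISTINCT job_name)
    FROM job_log
    WHERE job_name IN ('TaskA', 'TaskB', 'TaskC', 'TaskD', 'TaskE')
      AND log_date::DATE = NOW()::DATE) < 5
THEN
    RETURN;
END IF;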
On completion
Write to job_log, the same as in the other functions.
UPDATE. Cronjob schedule
Create multiple schedules in the cron job.
E.g.:
Let's say tasks A-E run for approximately 10-15 minutes,
and it's possible that one or two of them could run for 30-45-60 minutes.
Create a schedule for task F that attempts to start every 5 minutes.
How that will work:
attempt 1: task A finished, the others are still working -> exit
attempt 2: tasks A-C finished -> exit
attempt 3: tasks A-E finished -> start task F
attempt 4: tasks A-E finished, but pg_stat_activity shows an executing task F -> exit
attempt 5: tasks A-E finished, pg_stat_activity shows nothing, but the log shows that task F already executed -> nothing to do -> exit
... all further attempts behave the same until the next night
Summary
It's easy to extend this approach to any requirements:
another periodicity
or make it non-periodic altogether, e.g. create a table with a trigger and start execution on change
dependencies of any depth and/or "fuzzy" dependencies
... literally everything
The concept remains the same:
the cron job schedule means "try to run"
the decision to run or not is data-driven
I would be glad to hear criticism of any kind - who knows, maybe I'm overlooking something.

You could use the pg_stat_activity view to ensure that there are no active queries like your jobs 1-5.
Note:
Superusers and members of the built-in role pg_read_all_stats (see also Section 21.5) can see all the information about all sessions
...
while (
    select count(*) > 0
    from pg_stat_activity
    where query in ('call my_stored_proc_1()', 'call my_stored_proc_2()', ...))
loop
    perform pg_sleep(1);
    perform pg_stat_clear_snapshot(); -- needed to retrieve fresh data
end loop;
...
Just insert this code at the beginning of your stored proc 6 and schedule it to start a few seconds after jobs 1-5.
Note 1:
The condition can be simplified and generalized using a regexp:
where query ~ 'my_stored_proc_1|my_stored_proc_2|...'
Note 2:
You could implement a timeout using the clock_timestamp() function:
...
is_timedout := false;
timeout := '10 min'::interval; -- stop waiting after 10 minutes
start_time := clock_timestamp();
while (...)
loop
    perform pg_sleep(1);
    perform pg_stat_clear_snapshot(); -- needed to retrieve fresh data
    if clock_timestamp() - start_time > timeout then
        is_timedout := true;
        exit; -- plpgsql spells "break" as EXIT
    end if;
end loop;
if is_timedout then
    ...
else
    ...
end if;
...
Note 3:
Look at the other columns of pg_stat_activity; you may need to use them as well.
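Combining the loop, the regexp from Note 1, and the timeout from Note 2 into one self-contained procedure might look like this (a sketch; the procedure name, the 10-minute limit, and the RAISE NOTICE handling are illustrative):
CREATE OR REPLACE PROCEDURE my_stored_proc_6()
LANGUAGE plpgsql AS $$
DECLARE
    timeout     CONSTANT interval := '10 min';  -- stop waiting after 10 minutes
    start_time  timestamptz := clock_timestamp();
    is_timedout boolean := false;
BEGIN
    WHILE EXISTS (
        SELECT 1 FROM pg_stat_activity
        WHERE query ~ 'my_stored_proc_[1-5]'
          AND pid <> pg_backend_pid())       -- ignore this session's own query
    LOOP
        PERFORM pg_sleep(1);
        PERFORM pg_stat_clear_snapshot();    -- refresh pg_stat_activity data
        IF clock_timestamp() - start_time > timeout THEN
            is_timedout := true;
            EXIT;
        END IF;
    END LOOP;

    IF is_timedout THEN
        RAISE NOTICE 'jobs 1-5 still running after %; giving up', timeout;
        RETURN;
    END IF;

    -- ... the DELETE statements go here ...
END $$;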

Related

FOR LOOP without a transaction

We are doing a system redesign, and due to the change in design we need to import data from multiple similar source tables into one table. For this, I am running a loop over the list of tables and importing all the data. However, due to the massive amount of data, I got an out-of-memory error after around 12 hours and 20 tables. I then discovered that the loop runs in a single transaction, which I don't need, since the system that fills the data is suspended during that time. I believe this transaction is also making it take longer. My requirement is to run my query without any transaction.
DO $$
DECLARE r record;
BEGIN
    FOR r IN SELECT '
            INSERT INTO dbo.tb_requests
                (node_request_id, request_type, id, process_id, data, timestamp_d1, timestamp_d2, create_time, is_processed)
            SELECT lpad(A._id, 32, ''0'')::UUID, (A.data_type + 1) request_type, B.id, B.order_id, data_value, timestamp_d1, timestamp_d2, create_time, TRUE
            FROM dbo.data_store_' || id || ' A
            JOIN dbo.tb_new_processes B
                ON A.process_id = B.process_id
            WHERE A._id != ''0'';
        ' AS log_query
        FROM dbo.list_table
        ORDER BY line_id
    LOOP
        EXECUTE r.log_query;
    END LOOP;
END$$;
This is a sample code block, not the actual one, but I think it gives the idea.
Error message (translated from the original Japanese):
ERROR: Out of memory
DETAIL: Request for size 32 failed in memory context "ExprContext".
SQL state: 53200
You cannot run any statement on the server side without a transaction. In more recent Postgres releases (11 and later) you can run a COMMIT statement inside a DO block. It closes the current transaction and starts a new one. This can break up a very long transaction, and it can solve the memory problem, since Postgres releases some memory at transaction end.
Or use shell scripts (bash) instead, if that is possible.
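For example, since PostgreSQL 11 a COMMIT can go inside the loop, so each table's import runs in its own transaction (a sketch based on the question's code; note that the FOR loop's cursor is automatically converted to a holdable cursor at the first COMMIT):
DO $$
DECLARE r record;
BEGIN
    FOR r IN SELECT format($q$
                INSERT INTO dbo.tb_requests
                    (node_request_id, request_type, id, process_id, data,
                     timestamp_d1, timestamp_d2, create_time, is_processed)
                SELECT lpad(A._id, 32, '0')::UUID, (A.data_type + 1), B.id,
                       B.order_id, data_value, timestamp_d1, timestamp_d2,
                       create_time, TRUE
                FROM dbo.data_store_%s A
                JOIN dbo.tb_new_processes B ON A.process_id = B.process_id
                WHERE A._id != '0'
            $q$, id) AS log_query
        FROM dbo.list_table
        ORDER BY line_id
    LOOP
        EXECUTE r.log_query;
        COMMIT;  -- PostgreSQL 11+: ends the transaction after each table's import
    END LOOP;
END $$;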

How to force PostgreSQL function run sequentially

I have a PostgreSQL function A.
Many clients will call A:
- client X1 sends query 1 "SELECT A();" then
- client X2 sends query 2 "SELECT A();" then
- client X3 sends query 3 "SELECT A();" then
...
How do I force function A to run sequentially?
That is: query 1 runs --> finishes or times out --> query 2 runs --> finishes or times out --> query 3 runs --> finishes or times out ... (query 1 and query 2 are not allowed to run simultaneously).
Use advisory locks.
The first command in the function body should be (1234 is an exemplary integer constant):
perform pg_advisory_xact_lock(1234);
When two concurrent sessions call the function, one of them will wait until the function in the other one completes. This is a transaction-level advisory lock, automatically released when the transaction terminates.
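As a minimal sketch of such a function (the function name is illustrative):
CREATE FUNCTION example_xact() RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    PERFORM pg_advisory_xact_lock(1234);  -- waits here if another transaction holds 1234
    --
    -- function's commands; the lock is released at COMMIT or ROLLBACK
    --
END $$;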
Alternatively, you can use a session-level advisory lock, which can (should) be manually released:
create function example()
returns void language plpgsql as $$
begin
    perform pg_advisory_lock(1234);
    --
    -- function's commands
    --
    perform pg_advisory_unlock(1234);
end $$;
Any advisory lock obtained in a session is automatically released at the end of the session (if it hasn't been released earlier).
I think the full answer is to use pg_advisory_xact_lock + idle_in_transaction_session_timeout in function A.
Query 2 will wait until query 1 completes (the lock is auto-released) or times out (the session is auto-killed and the lock is likewise auto-released).
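A sketch of that combination (the 60-second value is arbitrary; set_config with is_local = true limits the setting to the current transaction):
CREATE FUNCTION a() RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    -- If the session holding the lock later sits idle in its transaction for
    -- more than 60 s, the server terminates it and the lock is released.
    PERFORM set_config('idle_in_transaction_session_timeout', '60000', true);
    PERFORM pg_advisory_xact_lock(1234);
    --
    -- function's commands
    --
END $$;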

Postgresql: How to repeat query as soon as finished?

Let's say I have a query like so:
SELECT * FROM a WHERE a.Category = 'liquid' ORDER BY a.MeasurementTime DESC;
and I want to see the results coming into the database 'live'.
How can I write a query for PostgreSQL that repeats as soon as it finishes?
You can use the \watch n command in psql to re-execute the query every n seconds.
Example:
postgres=# SELECT * FROM TABLE WHERE CONDITION;
postgres=# \watch 5
-- now "SELECT * FROM TABLE WHERE CONDITION" is re-executed every 5 seconds
You can't see them 'live'; queries complete before returning to the calling environment.
You could wrap this in a cron job (depending on your environment) or a similar scheduler and have it run every minute, or put it in a function and add that to pgAgent to run every minute.
Having a DML statement constantly running is not really a good idea, and I would not recommend it for performance and table-management purposes.
However...
Within a function you can create a loop with a wait using pg_sleep and no break clause, but really a job is the best way to go.
watch -n1 'psql -h {ip} {db} {user} -c "select * from condition;"'
Make sure you set the password for {user} in an environment variable:
Linux> export PGPASSWORD="password"

How to implement a distributed job queue using PostgreSQL?

We have a few different tasks (eg. process image, process video) that are created when a user uploads media. The concept we have at the moment is to have a primary Task that is the container for all task types, and a Subtask which has all the metadata required to process the task.
I've seen countless cases where the answer is to use Redis, but we would like to keep a task history to calculate things like average task time, and the metadata of subtasks can be complex.
I'm not very experienced with PostgreSQL, so treat this as pseudocode:
BEGIN TRANSACTION;
-- Find an unclaimed task.
-- Claim the task.
-- Prevent another worker from also claiming this task.
UPDATE (
SELECT FROM subtasks
INNER JOIN tasks
ON tasks.id = subtasks.id
WHERE tasks.started_at IS NULL -- Not claimed (NULL needs IS NULL, not "= NULL")
ORDER BY tasks.created_at ASC -- First in, first out
LIMIT 1
FOR UPDATE SKIP LOCKED -- Don't wait for it, keep looking.
)
SET tasks.started_at = now()
RETURNING *
-- Use metadata from the subtask to perform the task.
-- If an error occurs we can roll back, unlocking the row.
-- Will this also roll back if the worker dies?
-- Mark the task as complete.
UPDATE tasks
SET completed_at = now()
WHERE tasks.id = $id
END TRANSACTION;
Will this work?
Edit 1: Using clock_timestamp() and no subselect.
BEGIN TRANSACTION
-- Find an unclaimed task.
SELECT FROM subtasks
INNER JOIN tasks
ON tasks.id = subtasks.id
WHERE tasks.started_at IS NULL -- Not claimed (NULL needs IS NULL, not "= NULL")
ORDER BY tasks.created_at ASC -- First in, first out
LIMIT 1 -- We only want to select a single task.
FOR UPDATE SKIP LOCKED -- Don't wait for it, keep looking.
-- Claim the task.
UPDATE tasks
SET started_at = clock_timestamp()
WHERE id = $taskId
-- Use metadata from the subtask to perform the task.
-- If an error occurs we can roll back, unlocking the row.
-- Will this also roll back if the worker dies?
-- Task is complete.
-- Mark the task as complete.
UPDATE tasks
SET completed_at = clock_timestamp()
WHERE id = $taskId
END TRANSACTION
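For comparison, the claim step is often written as a single statement, feeding the SKIP LOCKED subquery straight into the UPDATE (a sketch using the question's columns):
-- Claim the oldest unclaimed task, skipping rows other workers hold locks on.
UPDATE tasks
SET started_at = clock_timestamp()
WHERE id = (
    SELECT id
    FROM tasks
    WHERE started_at IS NULL
    ORDER BY created_at       -- first in, first out
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING *;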

SQL Server 2005 Killing Query by time

I have come across a doubt regarding stopping (killing) a query:
There is a procedure that changes the amount of data that a subscriber replicates (I don't know all the details; it is functionality that has already been implemented by the company). It runs in a transaction, so if it does not finish it will roll back. While this procedure is running, replication is blocked for all subscribers, which is why we perform the operation during the night, when few or no subscribers are replicating. It is the weekend and I want to leave the procedure running (Friday 10pm), but I would like it to roll back if it hasn't finished by, e.g., 6am on Saturday, without my needing to go to the office to stop the procedure manually.
Setting it to run at 10pm is easy; I have used
waitfor time '22:00'
I'm aware that the same query can't contain a script that stops the whole query, since it is "sequential". Is there a way to do it by opening another query tab? I hope that creating a job is not the only solution (if it is a solution at all).
Thank you for your replies.
I suggest the easiest way to handle this is to put your long-running process into a job. It doesn't require an open instance of Management Studio with a reliably active network connection from your workstation to the server, and since a job can only run one instance at a time, it will be much easier to identify the actual process that is running the job (and deal with it accordingly).
So let's say I've convinced you, and you have a job called "Job I wanna kill" that is scheduled to run Friday night at 10 PM. The following stored procedure can be scheduled as a separate job, Saturday morning at 6:00 AM, or called manually.
CREATE PROCEDURE dbo.Kill_Job_I_Wanna_Kill
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @id UNIQUEIDENTIFIER;
    -- since the job could be dropped and re-created:
    SELECT @id = job_id
        FROM msdb.dbo.sysjobs
        WHERE name = 'Job I wanna kill';
    -- note that it could also be renamed, so you'll have to
    -- decide how to properly identify this job in the long run
    DECLARE @t TABLE
    (
        ID VARBINARY(32), rd INT, rt INT, nrd INT,
        nrt INT, nrs INT, rr INT, rs INT, rsID SYSNAME,
        Running BIT, cs INT, cra INT, [state] INT
    );
    -- note that this XP is undocumented and unsupported!
    INSERT @t EXEC master.dbo.xp_sqlagent_enum_jobs 1, 'sa', @id;
    IF EXISTS (SELECT 1 FROM @t WHERE Running = 1)
    BEGIN
        PRINT 'Cancelling job!';
        EXEC msdb.dbo.sp_stop_job @job_id = @id;
    END
    ELSE
    BEGIN
        PRINT 'Job is not running!';
    END
END
GO
When the job is killed successfully, the following will be seen as a printout when called manually, or in the job step history when scheduled:
Cancelling job!
Job 'Job I wanna kill' stopped successfully.
Now, there could be other complications - sometimes a rollback can take just as long as (or longer than) the time it took to get to that point. It all depends on what the long-running process is doing (I'm going to assume you're rebuilding / reorganizing indexes).