Snowflake event-based task scheduling instead of time-based task scheduling

I'm in need of examples of Snowflake event-based task scheduling instead of time-based task scheduling.
I could not find such examples in the Snowflake documentation.
Thanks in advance.

The only event source that can trigger a task is the completion of a prior task in a task tree; see the AFTER parameter:
CREATE TASK mytask2
  WAREHOUSE = mywh
  AFTER mytask1
AS
  INSERT INTO mytable2 (id, name) SELECT id, name FROM mytable1;
Also, if the event is an insert or another change to a record in a table, you can create a stream on the table and use the WHEN clause to keep the scheduled task from running until the stream contains data.
CREATE STREAM mystream ON TABLE mytable
  APPEND_ONLY = TRUE; // set to TRUE to capture only inserts
CREATE TASK mytask1
  WAREHOUSE = mywh
  SCHEDULE = '5 minute'
WHEN
  SYSTEM$STREAM_HAS_DATA('MYSTREAM')
AS
  INSERT INTO mytable1 (id, name) SELECT id, name FROM mystream WHERE METADATA$ACTION = 'INSERT';
https://docs.snowflake.com/en/sql-reference/sql/create-task.html
https://docs.snowflake.com/en/sql-reference/sql/create-stream.html

There is no event source that can trigger a task; instead, a task runs on a schedule.
https://docs.snowflake.com/en/user-guide/tasks-intro.html#task-scheduling
So event-based task scheduling is not possible for now.

Related

Prevent Hasura event trigger from creating new pending events

I'm attempting to update a table in a PostgreSQL database that has ~2.3 million rows. We also have an event trigger associated with this table which is supposed to run a microservice to perform further calculations whenever a row is updated/inserted/deleted.
As expected, the first time I updated the table, this led to the creation of over 2 million pending events. At the rate of a few thousand events cleared an hour, I don't have the option to wait for all events to be processed.
I'm looking to update the data in the table without the event trigger creating any pending events. Things I've tried:
deleting the event trigger, updating the table and then re-creating the event trigger. While we didn't have any pending events at first, all of them reappeared as soon as the event trigger was recreated.
manipulating the table storing the event logs itself to manually delete all pending events created in the last 2 days (following the Hasura docs here).
DELETE FROM hdb_catalog.event_invocation_logs
WHERE event_id IN (
SELECT id FROM hdb_catalog.event_log
WHERE trigger_name = 'my_trigger_name'
AND delivered = false
AND created_at > now() - interval '2 days');
The above would only delete a few tens of events each time, and then stop for some reason.
Before I try deleting all event logs as a last resort, I was wondering if it's safe to do so:
DELETE FROM hdb_catalog.event_invocation_logs;
DELETE FROM hdb_catalog.event_log;
Any help is appreciated, thanks.
How are you actually executing the update statement which is touching 2.3 million rows? Are you just running it using SQL directly?
If so, you can wrap your statements like so:
SET session_replication_role = replica;
UPDATE table SET thing = 'whatever';
SET session_replication_role = DEFAULT;
Triggers do not execute when in replica mode.
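If you prefer the override to be scoped to a single transaction, SET LOCAL (standard PostgreSQL; the table and column names below are the same placeholders as above) reverts automatically at COMMIT or ROLLBACK, so triggers cannot stay disabled by accident. Note that changing session_replication_role typically requires superuser privileges:

```sql
BEGIN;
-- SET LOCAL lasts only until the end of this transaction,
-- so triggers are re-enabled automatically on COMMIT or ROLLBACK.
SET LOCAL session_replication_role = replica;
UPDATE table SET thing = 'whatever';
COMMIT;
```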

How to run Postgres pg_cron Job AFTER another Job?

I am running some automated tasks on my Postgres database at night using the pg_cron extension, moving certain old records to archive tables. I run 5 stored procedures concurrently on 5 different background workers, so they all start at the same time and run on different workers (I am assuming this is similar to running different tasks on different threads in Java). These 5 stored procedures are independent (moving records to archive tables), so they can run at the same time. I schedule each one using a command like
cron.schedule('myJob1',
  '* * * * *',
  'CALL my_stored_proc_1()'
);
cron.schedule('myJob2',
  '* * * * *',
  'CALL my_stored_proc_2()'
);
...
cron.schedule('myJob5',
  '* * * * *',
  'CALL my_stored_proc_5()'
);
Now I have some more dependent stored procedures that I want to run, but they need to run AFTER these 5 jobs finish, because they perform some DELETE ... SQL operations.
How can I make this second stored procedure job (the one doing the DELETE queries) run AFTER my first 5 stored procedure jobs are done? I don't want to set a cron expression for it, because I don't know when the first 5 stored procedures will even finish...
Below I included a little schematic of how the Jobs are currently triggered and how I want it to work (if possible):
Preface: how I understand the problem
I hope that I understand the problem described by the OP.
If I am wrong, then everything below is invalid.
I suppose that it's about periodic night tasks heavy on CPU and/or IO.
E.g.:
there are tasks A-C for archiving data
maybe tasks D-E for rebuilding aggregates / refreshing materialized views
and finally task F that runs reindexing/analyze on the whole DB
So it makes sense to run task F only after tasks A-E are finished.
Each task needs to run just once in a period of time:
once a day, hour, or week, or only during weekends at night
it's better not to run at a time when the server is under load
Whether this fits the OP's requirement, I don't know.
For the sake of simplicity, let's presume that each task runs only once per night. It's easy to extend this to other periods/requirements.
Data-driven approach
1. Add log table
E.g.
CREATE TABLE job_log (
  log_id bigint,
  job_name text,
  log_date timestamptz
);
Tasks A-E
On start
For each job function, do this check on start:
IF EXISTS(
    SELECT 1 FROM job_log
    WHERE job_name = 'TaskA' -- TaskB-TaskE for each function
      AND log_date::DATE = NOW()::DATE -- check that the function already executed this night
) OR EXISTS(
    SELECT 1 FROM pg_stat_activity
    WHERE query LIKE 'SELECT * FROM jobA_function();' -- check that the job is not executing right now
) THEN
    RETURN;
END IF;
Other conditions could be added as well: the number of connections, the existence of locks, and so on.
This way it is guaranteed that the function will not be executed more frequently than needed.
On finish
INSERT INTO job_log
SELECT
    (SELECT MAX(log_id) FROM job_log) + 1 -- or use a sequence/other autoincrement
  , 'TaskA'
  , NOW();
Cronjob schedule
Its meaning becomes different.
Now it is: "try to initiate execution of the task".
It's safe to schedule it for every hour within a chosen period, or even more frequently.
A cron job cannot know whether the server is under load, whether there are locks on a table, or whether somebody started the task manually.
The job function can be smarter about that.
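As a sketch of the whole pattern for one task (the function name and the LIKE pattern are illustrative, not from the original), the guard check and the completion log entry can be wrapped into a single PL/pgSQL function that the cron job calls:

```sql
CREATE OR REPLACE FUNCTION taskA_function() RETURNS void AS $$
BEGIN
    -- Skip if TaskA already ran tonight, or is running right now
    -- in another session.
    IF EXISTS (SELECT 1 FROM job_log
               WHERE job_name = 'TaskA'
                 AND log_date::DATE = NOW()::DATE)
       OR EXISTS (SELECT 1 FROM pg_stat_activity
                  WHERE query LIKE '%taskA_function()%'
                    AND pid <> pg_backend_pid()) THEN
        RETURN;
    END IF;

    -- ... the actual archiving work goes here ...

    -- Record completion so later attempts tonight become no-ops.
    INSERT INTO job_log
    SELECT COALESCE(MAX(log_id), 0) + 1, 'TaskA', NOW()
    FROM job_log;
END;
$$ LANGUAGE plpgsql;
```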
Task F
Same as above but check on start looks for completion of other tasks.
E.g.
IF NOT EXISTS(
    SELECT 1 FROM job_log
    WHERE job_name = 'TaskA'
      AND log_date::DATE = NOW()::DATE
) OR NOT EXISTS(
    SELECT 1 FROM job_log
    WHERE job_name = 'TaskB'
      AND log_date::DATE = NOW()::DATE
)
.... -- checks for completion of the other tasks
OR EXISTS(
    SELECT 1 FROM job_log
    WHERE job_name = 'TaskF'
      AND log_date::DATE = NOW()::DATE -- check that task F already executed this night
) OR EXISTS(
    SELECT 1 FROM pg_stat_activity
    WHERE query LIKE 'SELECT * FROM jobF_function();' -- check that the job is not executing right now
) THEN
    RETURN;
END IF;
On completion
Write to job_log the same as other functions.
UPDATE. Cronjob schedule
Create multiple schedules in the cronjob.
E.g.:
Let's say tasks A-E run for approximately 10-15 minutes.
And it's possible that one or two of them could run for 30-45-60 minutes.
Create a schedule for task F that attempts to start every 5 minutes.
How that will work:
attempt 1: task A finished, the others are still working -> exit
attempt 2: tasks A-C finished -> exit
attempt 3: tasks A-E finished -> start task F
attempt 4: tasks A-E finished, but pg_stat_activity shows task F executing -> exit
attempt 5: tasks A-E finished, pg_stat_activity is empty, but the log shows task F already executed -> no need to work -> exit
... all other attempts will behave the same until the next night
Summary
It's easy to extend this approach for any requirements:
another periodicity
or make it non-periodic entirely, e.g. make a table with a trigger and start execution on change
dependencies of any depth and/or "fuzzy" dependencies
... literally everything
The concept remains the same:
the cron schedule means "try to run"
the decision to run or not is data-driven
I would be glad to hear criticism of any kind - who knows, maybe I'm overlooking something.
You could use the pg_stat_activity view to ensure that there are no active queries like your jobs 1-5.
Note:
Superusers and members of the built-in role pg_read_all_stats (see also Section 21.5) can see all the information about all sessions
...
while (
select count(*) > 0
from pg_stat_activity
where query in ('call my_stored_proc_1()', 'call my_stored_proc_2()', ...))
loop
perform pg_sleep(1);
perform pg_stat_clear_snapshot(); -- needed to retrieve fresh data
end loop;
...
Just insert this code at the beginning of your stored proc 6 and schedule it to start a few seconds after jobs 1-5.
Note 1:
The condition could be simplified and generalized using a regexp:
where query ~ 'my_stored_proc_1|my_stored_proc_2|...'
Note 2:
You could implement a timeout using the clock_timestamp() function:
...
is_timedout := false;
timeout := '10 min'::interval; -- stop waiting after 10 minutes
start_time := clock_timestamp();
while (...)
loop
perform pg_sleep(1);
perform pg_stat_clear_snapshot(); -- needed to retrieve fresh data
if clock_timestamp() - start_time > timeout then
is_timedout := true;
break;
end if;
end loop;
if is_timedout then
...
else
...
end if;
...
Note 3:
Look at the other columns of pg_stat_activity; you may need to use them as well.
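Putting the notes together, the waiting preamble might look like the following complete PL/pgSQL block (a sketch: the procedure names and the 10-minute limit come from the snippets above, and the exact query strings must match what pg_cron actually executes):

```sql
DO $$
DECLARE
    is_timedout boolean := false;
    timeout interval := '10 min';  -- stop waiting after 10 minutes
    start_time timestamptz := clock_timestamp();
BEGIN
    WHILE (SELECT count(*) > 0
           FROM pg_stat_activity
           WHERE query IN ('call my_stored_proc_1()',
                           'call my_stored_proc_2()',
                           'call my_stored_proc_3()',
                           'call my_stored_proc_4()',
                           'call my_stored_proc_5()')
             AND pid <> pg_backend_pid())
    LOOP
        PERFORM pg_sleep(1);
        PERFORM pg_stat_clear_snapshot();  -- refresh pg_stat_activity
        IF clock_timestamp() - start_time > timeout THEN
            is_timedout := true;
            EXIT;
        END IF;
    END LOOP;

    IF is_timedout THEN
        RAISE NOTICE 'jobs 1-5 still running after %; giving up', timeout;
    ELSE
        CALL my_stored_proc_6();  -- all five jobs are done
    END IF;
END;
$$;
```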

Pull the data on a daily basis

I have data in my Redshift cluster. What is the best way to pull the data on a daily basis from Redshift and create a new table YY in Redshift based on a few SQL queries?
For example, we have a table XX in Redshift and I want to create a table in Redshift from the top 10 rows of table XX:
CREATE TABLE YY AS SELECT TOP 10 * FROM XX;
Using AWS Glue you could schedule the job and then write the script code to do specific things. AWS Glue code can be triggered by the following 3 types of events; in your case I think #1 is applicable:
A trigger that is based on a cron schedule.
A trigger that is event-based; for example, the successful completion of another job can start an AWS Glue job.
A trigger that starts a job on demand.
For your case, in my opinion, the first type should be the most applicable.
I hope this gives you some pointers.
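The scheduled job could then run a statement along these lines (a sketch; XX and YY are the tables from the question, and whether the top 10 needs an ORDER BY depends on your data):

```sql
-- Rebuild YY each day from the current top 10 rows of XX.
-- Wrapping the swap in a transaction keeps readers from
-- seeing a half-built table.
BEGIN;
DROP TABLE IF EXISTS YY;
CREATE TABLE YY AS SELECT TOP 10 * FROM XX;
COMMIT;
```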

Implementing a work queue in PostgreSQL

I want to make a persistent job queue in PostgreSQL, so that multiple workers can select one job from the queue (using SELECT ... FOR UPDATE SKIP LOCKED), process it, and then delete it from the queue. I have a table:
create table queue (
  id serial primary key,
  some_job_param1 text not null,
  some_job_param2 text not null
);
Now if there are two jobs, then it works fine:
worker1 starts a transaction and selects the first job:
begin;
select * from queue for update skip locked limit 1;
and starts processing. worker2 does the same thing and selects the second job with the same query.
After worker1 does its job, it deletes the job from the queue and commits the transaction:
delete from queue where id=$1;
commit;
Then worker1 is ready for a new job, so it does the same thing: begins a new transaction and selects a job that isn't locked. But the problem is that there are no more jobs; the query returns zero rows.
Ideally, the query would block until there is a new job and then return a result. Is that somehow possible? Or am I going in the wrong direction?
EDIT:
the workers are external processes. So if a worker dies, the session dies and so does the transaction. Then the selected job is no longer locked and is ready for another worker. The pseudocode would look like this:
while (true) {
    tx = db.new_transaction()
    job_id, param1, param2 = tx.query("select * from queue for update skip locked limit 1")
    try {
        some_long_processing(param1, param2)
        tx.exec("delete from queue where id = $1", job_id)
        tx.commit()
    } catch {
        tx.rollback()
    }
}
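One common way to avoid spinning when the queue is empty (a sketch, not from the question; LISTEN/NOTIFY is standard PostgreSQL, and queue_channel is an illustrative name) is to have producers send a notification after each insert, and have workers wait on it between polls:

```sql
-- Producer: announce new work after inserting a job.
INSERT INTO queue (some_job_param1, some_job_param2) VALUES ('a', 'b');
NOTIFY queue_channel;

-- Worker: subscribe once per connection, then poll as before.
LISTEN queue_channel;
-- If SELECT ... FOR UPDATE SKIP LOCKED LIMIT 1 returns no row,
-- roll back and wait for a notification before polling again.
```

The wait itself happens client-side (the driver blocks until the connection receives a notification), since plain SQL has no blocking primitive for this.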

How to implement a distributed job queue using PostgreSQL?

We have a few different tasks (e.g. process image, process video) that are created when a user uploads media. The concept we have at the moment is to have a primary Task that is the container for all task types, and a Subtask which has all the metadata required to process the task.
I've seen countless cases where the answer is to use Redis, but we would like to keep a task history to calculate things like average task time, and the metadata of subtasks can be complex.
I'm not very experienced with PostgreSQL so see this as pseudocode:
BEGIN TRANSACTION;
-- Find an unclaimed task.
-- Claim the task.
-- Prevent another worker from also claiming this task.
UPDATE (
SELECT FROM subtasks
INNER JOIN tasks
ON tasks.id = subtasks.id
WHERE tasks.started_at IS NULL -- Not claimed
ORDER BY tasks.created_at ASC -- First in, first out
LIMIT 1
FOR UPDATE SKIP LOCKED -- Don't wait for it, keep looking.
)
SET tasks.started_at = now()
RETURNING *
-- Use metadata from the subtask to perform the task.
-- If an error occurs we can roll back, unlocking the row.
-- Will this also roll back if the worker dies?
-- Mark the task as complete.
UPDATE tasks
SET completed_at = now()
WHERE tasks.id = $id
END TRANSACTION;
Will this work?
Edit 1: Using clock_timestamp() and no subselect.
BEGIN TRANSACTION
-- Find an unclaimed task.
SELECT FROM subtasks
INNER JOIN tasks
ON tasks.id = subtasks.id
WHERE tasks.started_at IS NULL -- Not claimed
ORDER BY tasks.created_at ASC -- First in, first out
LIMIT 1 -- We only want to select a single task.
FOR UPDATE SKIP LOCKED -- Don't wait for it, keep looking.
-- Claim the task.
UPDATE tasks
SET started_at = clock_timestamp()
WHERE id = $taskId
-- Use metadata from the subtask to perform the task.
-- If an error occurs we can roll back, unlocking the row.
-- Will this also roll back if the worker dies?
-- Task is complete.
-- Mark the task as complete.
UPDATE tasks
SET completed_at = clock_timestamp()
WHERE id = $taskId
END TRANSACTION
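For what it's worth, the select-then-claim steps are often collapsed into one statement by putting the locking subquery inside the UPDATE; this is a sketch against the tasks table from the question (assuming the created_at column used above):

```sql
-- Claim the oldest unclaimed task in a single statement.
-- FOR UPDATE SKIP LOCKED in the subquery makes concurrent
-- workers pick different rows instead of blocking on each other.
UPDATE tasks
SET started_at = clock_timestamp()
WHERE id = (
    SELECT id
    FROM tasks
    WHERE started_at IS NULL
    ORDER BY created_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING *;
```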