I am implementing a lightweight jobs system in my highly-concurrent application. For simplicity, I will be using Postgres to manage the state of all jobs in the system.
I have a table where processes can mark specific jobs as "running" via a boolean flag is_running. If multiple processes attempt to run the same job, only one of the processes should "win".
I want to have an atomic statement that the application processes will call in an attempt to exclusively mark the job as running.
UPDATE jobs SET is_running = true WHERE is_running = false AND id = 1
If there is a single row with (id=1, is_running=false) and multiple processes attempt to execute the above statement, is it possible for more than one process to actually set is_running=true?
I only want one of the processes to see an updated row count of 1 - all other processes should see an updated row count of 0.
Your approach is safe and free from race conditions: under the default READ COMMITTED isolation level, PostgreSQL re-evaluates the WHERE condition on the latest row version after it has had to wait for the lock held by a concurrent modification. It then sees the changed value and skips the row, so only one process reports an updated row count of 1.
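If the application wants the claimed row back rather than just the affected-row count, a RETURNING clause is a minimal way to express "exactly one winner" in a single statement (a sketch of the same UPDATE, not a change to its semantics):

-- Only the process whose UPDATE actually changes the row gets a row back;
-- every other concurrent process gets zero rows (and an update count of 0).
UPDATE jobs
SET is_running = true
WHERE id = 1
  AND is_running = false
RETURNING id;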
I am running multiple background worker processes in my Postgres database using the pg_cron extension. Unfortunately, I cannot use pg_timetable, as suggested by another user here.
Thus, I have 5 dependent "jobs" that need 1 other independent procedure/function to execute and complete before they can start. I originally had my cron jobs simply check, every 30 minutes or so, a "job_log" table I created to see if the independent job had completed (i.e. if yes, execute the procedure; if not, return out of the procedure and check at the next cron interval).
However, I believe I could greatly simplify the way I am triggering/orchestrating all these jobs/procedures if I utilize pg_sleep and start all the jobs at one time (so no more checking every 30 minutes). I would be running these jobs concurrently at night, so I believe it shouldn't affect my actual traffic much.
i.e.
WHILE some_variable != some_condition LOOP
    PERFORM pg_sleep(1);
    some_variable := some_value; -- update variable here
END LOOP;
My question is:
Would starting all these jobs at one time (i.e. setting a concrete time in the cron expression, e.g. 15 18 * * *) and utilizing pg_sleep be bad practice/inefficient, since I would be idling 5 background workers while the 1 job completes? The 1 job these depend on could take any amount of time to finish, i.e. 15 min, 30 min, 1 hr (it should be < 1 hr, though).
Or is it better to simply use a cron expression to check every 5 minutes or so whether the main/independent job is done, so my other dependent jobs can then run?
Running two schedulers, one of them home-built, seems more complex than just running one scheduler that does 2 (or 6, 1+5, however you count it) different things. If your goal is to make things simpler, does your proposal really achieve that?
I wouldn't worry about 5 backends sleeping on pg_sleep at the same time. You might worry about them holding back the xid horizon while they do so, which would make vacuuming and HOT pruning less effective; but if you already have one long-running task holding one snapshot (the thing they are waiting for), then more of them aren't going to make matters worse.
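For what it's worth, a minimal sketch of what one of the waiting jobs could look like, assuming a job_log table with job_name and status columns and a dependent_procedure() to run afterwards (all three names are placeholders for what the question describes):

DO $$
DECLARE
    done boolean := false;
BEGIN
    WHILE NOT done LOOP
        PERFORM pg_sleep(60);  -- check once a minute instead of a tight 1-second loop
        SELECT EXISTS (
            SELECT 1 FROM job_log
            WHERE job_name = 'independent_job'
              AND status = 'completed'
        ) INTO done;
    END LOOP;
    CALL dependent_procedure();  -- placeholder for the dependent job's own work
END $$;

Under the default READ COMMITTED isolation each statement inside the loop takes a fresh snapshot, so it will see the independent job's update; note that the DO block still occupies one backend for its full duration, which is exactly the idling the question asks about.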
What is the best practice or way to process records from the DB on a schedule?
Situation:
A microservice based on Quarkus, responsible for sending communications to customers.
A DB table holding customer records (100,000 customers).
The microservice runs on multiple nodes (4 nodes).
Expectation:
There should be a scheduler that runs every 5 seconds.
Fetches the records from the DB where employee status = pending.
Should be a multithreaded architecture.
Sends an email to the employee's email address.
Problem 1:
The same scheduler running on multiple nodes picks the same records and processes them. How can we avoid this?
Problem 2:
The scheduler picks 100 records and processing them takes more than 5 seconds, so the scheduler runs again and picks some of the same records. How can we avoid that?
If you are planning to run your microservices on Kubernetes, I would suggest using an external component as a scheduler and letting it distribute the work over your microservices using messages or HTTP invocations.
As responses to your questions, here we go:
You can use a locking strategy to "reserve" each row: add a field that indicates the record is being processed, and exclude all records with that field set from your query. That way, when the scheduler fires, it reads a set of rows that are not reserved and uses a multithreaded approach to process them; the locking strategy (pessimistic or optimistic) prevents other nodes from marking the same row as reserved for themselves. The thread that was able to commit the reservation then processes the records and updates their state, or releases the "reserve" so other workers can pick up the record if needed.
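The question doesn't say which database holds the table, but if it is Postgres (an assumption, in keeping with the rest of this page), the "reserve" step can be written so that concurrent nodes skip rows another node has already locked. Table and column names here are placeholders based on the question:

-- Claim up to 100 pending rows; FOR UPDATE SKIP LOCKED makes concurrent
-- schedulers pass over rows that another node has already locked.
UPDATE customer_records
SET status = 'processing'
WHERE id IN (
    SELECT id
    FROM customer_records
    WHERE status = 'pending'
    ORDER BY id
    LIMIT 100
    FOR UPDATE SKIP LOCKED
)
RETURNING id, email;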
You can always instruct your scheduler not to execute if a previous execution is still running:
@Scheduled(identity = "ProcessUpdateScheduler", every = "2s", concurrentExecution = Scheduled.ConcurrentExecution.SKIP)
You mainly have two approaches among other possible ones:
Pulling (distributed mining or work distribution): Each instance of the microservice picks a pending row and marks it as "processing", committing the transaction. If it is able to commit, that instance holds the right to process the record and continues with its execution; if not, it tries to retrieve a different row, or simply exits and waits for the next invocation. This approach scales horizontally, because adding more workers means increasing your processing throughput.
Pushing (central distribution, distributed processing): You have two kinds of components. First, the "Distributor", which is executed by the scheduler and is responsible for picking rows to be processed and marking them as "pending processing"; these rows are then forwarded via a messaging system or HTTP call to the "Processor". The Processor component receives a record as input and is responsible for processing it completely or releasing the hold ("processing pending") state.
Choose the one best suited to your scenario. If you go for the second option, you can have one or more Distributors if necessary, but in order to increase your processing throughput you only need to scale the "Processor" workers.
Is there any way to delay a Celery task from running based on a condition? Before it moves from scheduled to active, I would like to perform a quick check to see if my machine can run the task, based on the arguments provided and my machine's state at the time. If it can't, it should halt the scheduled queue and wait until the condition is satisfied.
I've looked around at the following options, but they didn't seem to cut it:
Celery's signals: the closest thing I could get to is task_prerun(), but regardless of what I put in there, the task still gets run, and it doesn't stop the other scheduled tasks from running. There's also worker_ready(), but that doesn't look at the upcoming task's arguments to do the check.
Database lock (also here as well): I can have each of the tasks start running normally and then do the check at the beginning of the task's run, but if I set a periodic interval to check whether the condition is met, I lose the order of the active queue, since the condition can be met at any point and any one of the many active tasks would be able to continue. This is where the database lock comes in, and it is so far the most feasible solution: I can take a lock every time I do the check, and if the condition's not met, it stays locked. When the condition is finally met, I release the lock for the next item in the queue, preserving the queue's original order.
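For what it's worth, the question doesn't say which database backs the lock, but if it were Postgres, session-level advisory locks are one way to sketch the gate described above (the key 42 is arbitrary, and this is only an illustration of the idea, not the questioner's actual setup):

-- At the start of each task, block until the gate is open; waiting backends
-- queue up, so the order in which tasks reach this point is roughly preserved.
SELECT pg_advisory_lock(42);

-- ... run the task's work once the condition allows it ...

-- When the machine-state condition is satisfied, release the gate so the next
-- waiting task can proceed.
SELECT pg_advisory_unlock(42);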
I find it surprising that celery doesn't have this functionality to specify if/when the next item in the scheduled queue is ready to be run.
Say I am using Postgres for job queueing, meaning I have a table like:
table_jobs:
-id
-name
-taskId
-created_date
-status (open, processing, completed)
So there will be multiple processes reading this table, and they need to lock each row. Is there a way to prevent concurrency issues, i.e. multiple processes reading the table and taking ownership of a job that is already taken?
There is the SELECT ... FOR UPDATE feature:
FOR UPDATE causes the rows retrieved by the SELECT statement to be locked as though for update. This prevents them from being modified or deleted by other transactions until the current transaction ends. That is, other transactions that attempt UPDATE, DELETE, or SELECT FOR UPDATE of these rows will be blocked until the current transaction ends.
One possible way of implementing your queue is:
Worker process runs SELECT ... WHERE status = 'open' FOR UPDATE.
Worker process runs UPDATE ... WHERE id IN (...) with IDs it got from previous step.
Worker process does its stuff.
Worker process updates the tasks' statuses to completed.
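As a rough sketch of those four steps against the table from the question (the batch size, ordering column, and literal IDs in the IN lists are placeholders):

BEGIN;

-- Step 1: lock a batch of open jobs so no other worker can grab them.
SELECT id, name, taskId
FROM table_jobs
WHERE status = 'open'
ORDER BY created_date
LIMIT 10
FOR UPDATE;

-- Step 2: mark the locked rows as processing (IDs come from step 1).
UPDATE table_jobs
SET status = 'processing'
WHERE id IN (1, 2, 3);

COMMIT;

-- Step 3: the worker does its actual work outside the transaction.

-- Step 4: mark the finished jobs as completed.
UPDATE table_jobs
SET status = 'completed'
WHERE id IN (1, 2, 3);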
I'm using the Perl client of beanstalkd. I need a simple way to not enqueue the same work twice.
I need something that basically waits until there are K elements and then groups them together. To accomplish this, I have the producer:
insert item(s) into DB
insert a queue item into beanstalkd
And the consumer:
while ( 1 ) {
    beanstalkd.retrieve
    if ( DB items >= K )
        func_to_process_all_items
    kill job
}
This is linear in the number of requests/processing, but in the case of:
insert 1 item
... repeat many times ...
insert 1 item
Assuming all these insertions happened before a job was retrieved, this would add N queue items, and the consumer would do something like this:
check DB, process N items
check DB, no items
... many times ...
check DB, no items
Is there a smarter way to do this so that it does not insert/process the later job requests unnecessarily?
I had a related requirement. I only wanted to process a specific job once within a few minutes, but the producer could queue several instances of the same job. I used memcache to store the job identifier and set the expiry of the key to just a few minutes.
When a worker tried to add the job identifier to memcache, only the first would succeed - on failure to add the job id, the worker would delete the job. After a few minutes, the key expires from memcache and the job can be processed again.
Not particularly elegant, but it works.
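The answer above uses memcache; purely as a sketch in keeping with the Postgres theme of this page (and explicitly not what the answer actually used), the same only-the-first-add-wins idea can be expressed with a primary-key conflict:

-- Hypothetical dedup table: one row per job identifier, with an expiry time.
CREATE TABLE IF NOT EXISTS job_dedup (
    job_id     text PRIMARY KEY,
    expires_at timestamptz NOT NULL
);

-- Only the first worker to insert a given job_id gets a row back; any other
-- worker sees zero rows and deletes its copy of the job instead of running it.
INSERT INTO job_dedup (job_id, expires_at)
VALUES ('send-weekly-report', now() + interval '5 minutes')
ON CONFLICT (job_id) DO NOTHING
RETURNING job_id;

-- Unlike memcache, expiry is not automatic; purge old keys periodically.
DELETE FROM job_dedup WHERE expires_at < now();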
Will this work for you?
Create two tubes, "buffer" and "live". Your producer only ever adds to the "buffer" tube.
Create two workers, one watching "buffer" and the other watching "live", each calling the blocking reserve().
Whenever the "buffer" worker returns from reserve, it buries the job if there are fewer than K items. If there are exactly K, it "kicks" all K jobs and transfers them to the "live" tube.
The "live" watcher will now return from its own reserve().
You just need to take care that a job never returns to the buffer queue from the buried state. A failsafe way to do this might be to delete it and then add it to "live".
The two separate queues are only for cleaner separation. You could do the same with a single queue by burying every job until there are K-1 buried, and then, on the arrival of the K-th job, kicking all of them live.