How to handle searchkick async reindex and promotion - searchkick

Looking at the parallel reindexing section of the README, it says "Once the jobs complete, promote the new index ...".
I'm running a scheduled Sidekiq task that begins the reindex. I have a check that looks like this:
def searchkick_index_complete?(index_name)
  # reindex_status returns a hash like { completed: true/false, batches_left: n }
  Searchkick.reindex_status(index_name).fetch(:completed)
end
The check runs periodically after a reindex is triggered and promotes the new index once the status reports complete.
Is there a recommended way to check for reindex completion and then promote?
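For illustration, a minimal sketch of a polling-and-promote flow along those lines, assuming Sidekiq (6.3+ for Sidekiq::Job) and a Product model; the job name and the 60-second interval are arbitrary:

class PromoteIndexJob
  include Sidekiq::Job

  def perform(index_name)
    if Searchkick.reindex_status(index_name).fetch(:completed)
      # all reindex batches are done; swap the alias over to the new index
      Product.search_index.promote(index_name)
    else
      # batches still pending; check again in a minute
      self.class.perform_in(60, index_name)
    end
  end
end

# kick off the async reindex and schedule the first completion check
index_name = Product.reindex(mode: :async)[:index_name]
PromoteIndexJob.perform_in(60, index_name)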

Related

Impact of unscheduling or deleting a running job in Quartz?

Some jobs are scheduled using either a SimpleTrigger or a CronTrigger, and I now want to unschedule and delete them. A job can be running or may already have completed its execution. If a job that is unscheduled or already finished is deleted, there is no harmful impact, but what happens to a running job when it is unscheduled via unscheduleJob() or deleted directly via deleteJob() on the Quartz scheduler?
And if a running job would be halted mid-execution when unscheduleJob() or deleteJob() is called, is there a way to let the job complete its current execution before unscheduling or deleting, to avoid any malfunction or bad data?
I tried checking for conflicting jobs and also made use of a SchedulerListener, but didn't get any useful information.
Thanks in Advance!!!
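(For reference: Quartz does not interrupt an execution that is already in flight; unscheduleJob() and deleteJob() only prevent future firings, and stopping a running instance requires the job to implement InterruptableJob plus a call to Scheduler.interrupt(). A minimal sketch of letting a job go idle before deleting it, with illustrative class and method names:)

import java.util.List;
import org.quartz.JobExecutionContext;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;

public class SafeJobRemoval {
    // Polls until no instance of the job is executing, then deletes it.
    // A real version would add a timeout and back-off.
    public static void deleteWhenIdle(Scheduler scheduler, JobKey jobKey)
            throws SchedulerException, InterruptedException {
        while (isRunning(scheduler, jobKey)) {
            Thread.sleep(1000);
        }
        scheduler.deleteJob(jobKey);
    }

    private static boolean isRunning(Scheduler scheduler, JobKey jobKey)
            throws SchedulerException {
        List<JobExecutionContext> running = scheduler.getCurrentlyExecutingJobs();
        for (JobExecutionContext ctx : running) {
            if (ctx.getJobDetail().getKey().equals(jobKey)) {
                return true;
            }
        }
        return false;
    }
}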

ADF Scheduling when existing Job not yet finished

Having read https://learn.microsoft.com/en-us/azure/data-factory/v1/data-factory-scheduling-and-execution, it is unclear to me:
If a schedule is made for a job to run every hour, can we stop the concurrent execution of the next run at hour+1 if the run for hour+0 is still going?
It looks as if concurrency = 1 means this, but will that invocation simply not start until the current execution has finished, or will it be discarded?
When you set concurrency to 1, only one instance is allowed to run at a time. If the scheduled trigger fires again while the pipeline is already running, the next invocation is queued; it will start after the current instance finishes.
So for your question: the next invocation will be queued, and after the first run finishes, the next run will start.
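For illustration, the setting lives on the pipeline definition; a minimal sketch of a (v2-style) pipeline JSON, where the pipeline name and the Wait activity are placeholders:

{
  "name": "HourlyPipeline",
  "properties": {
    "concurrency": 1,
    "activities": [
      {
        "name": "WaitStep",
        "type": "Wait",
        "typeProperties": { "waitTimeInSeconds": 1 }
      }
    ]
  }
}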

GitHub Actions Concurrency Queue

Currently we are using GitHub Actions for infrastructure CI.
The infrastructure uses Terraform, and a code change to a module triggers plan and deploy for the changed module only (hence only related modules are updated, e.g. one pod container).
Since an auto-update can be triggered by a push to another GitHub repository, updates can arrive in roughly the same time frame, e.g. Pod A's image is updated and Pod B's image is updated.
Without any concurrency control in place, since Terraform holds a state lock, one of the actions will fail due to a lock timeout.
After implementing concurrency, two pushes at the same time are fine, as the second one waits for the first to finish.
Yet if more are coming, GitHub's concurrency keeps only the last push in the queue and cancels the ones already waiting (the in-progress one can still continue). This is logical from a single-app perspective, but since our infra code relies on diff checks, skipping the deployment of a cancelled job actually skips a deployment!
Is there a mechanism to queue workflows (or maybe even set a queue wait timeout) on GitHub Actions?
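(For reference, a workflow-level concurrency block of the kind described here might look like the following; the group name is illustrative:)

concurrency:
  group: terraform-infra
  cancel-in-progress: false

Note that even with cancel-in-progress: false, GitHub keeps at most one run pending per group, so an already-waiting run is cancelled when a newer one queues, which is exactly the behaviour described above.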
Eventually we wrote our own script in the workflow to wait for previous runs:

- Get information on the current run
- Collect previous runs that have not completed
- Wait (in a loop) until they are completed
- Once the waiting loop exits, continue the workflow
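A minimal sketch of such a step, assuming the gh CLI is available on the runner (it is preinstalled on GitHub-hosted runners); the polling interval and jq filter are illustrative:

- name: Wait for earlier runs of this workflow
  env:
    GH_TOKEN: ${{ github.token }}
  run: |
    while true; do
      # count earlier runs of this workflow that have not finished
      active=$(gh run list --workflow "${{ github.workflow }}" \
        --json databaseId,status \
        --jq "[.[] | select(.databaseId < ${{ github.run_id }} and .status != \"completed\")] | length")
      if [ "$active" -eq 0 ]; then break; fi
      sleep 30
    done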
Tutorial on checking the status of workflow jobs:
https://www.softwaretester.blog/detecting-github-workflow-job-run-status-changes

How to rerun a failed task in Airflow and continue downstream tasks

I set up a workflow in Airflow, and one of the jobs failed. After I fixed the problem, I want to rerun the failed task and continue the workflow. The details are like below:
As above, I prepared to run the task "20_validation" and pressed the 'Run' button like below:
But the problem is that when the task '20_validation' finished, the downstream tasks did not continue. What should I do?
Use the Clear button directly below the Run button you drew the box around in your screenshot.
This will clear the failed task state and all the tasks after it (since "downstream" is selected to the right), causing them all to be run or rerun.
Since the upstream task failed, the downstream tasks shouldn't have run anyway, so clearing shouldn't cause successful tasks to rerun.
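The same clear can also be done from the CLI (Airflow 2.x syntax; the DAG id here is a placeholder):

airflow tasks clear my_dag --task-regex '20_validation' --downstream --yes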

Why do Eclipse "Delete" and File "New" block the UI while waiting, whereas other jobs I created don't block but go into a waiting state?

I schedule a job for file creation under the project (following the example in the "On the Job: The Eclipse Jobs API" article).
The scheduling rule used is job.setRule(ResourcesPlugin.getWorkspace().getRoot()), which means the job acquires a lock on the workspace root itself. Any other operation I then perform, like "Delete" or File > "New" project creation, goes into a waiting state.
But why do the Eclipse "Delete" and File "New" operations block my entire UI, whereas the jobs I created only go into a waiting state when I hold a lock on the workspace root?
Can I implement my own "Delete" operation that, like any other job, goes into a waiting state rather than blocking the UI when another job with the same scheduling rule is already running?
The File New and Delete operations don't use the Jobs API, but they do wait for access to the workspace, so they can block the UI until it is available.
You could write New and Delete operations that use the Jobs API so that the operations run in the background.
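A minimal sketch of such a background delete using a WorkspaceJob, assuming resource holds the IResource to remove (the question used the workspace root as the rule; a narrower delete rule is shown here):

import org.eclipse.core.resources.IResource;
import org.eclipse.core.resources.WorkspaceJob;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;

// If another job holds a conflicting scheduling rule, this job waits in the
// background instead of blocking the UI thread.
WorkspaceJob job = new WorkspaceJob("Deleting " + resource.getName()) {
    @Override
    public IStatus runInWorkspace(IProgressMonitor monitor) throws CoreException {
        resource.delete(true, monitor);
        return Status.OK_STATUS;
    }
};
job.setRule(resource.getWorkspace().getRuleFactory().deleteRule(resource));
job.schedule();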