Why do Eclipse "Delete" and File > "New" block the UI while waiting, whereas other jobs which I created don't block but go to a waiting state? - eclipse

First, I schedule a job that creates files under the project (following the example from the "On the Job: The Eclipse Jobs API" article).
The scheduling rule used is job.setRule(ResourcesPlugin.getWorkspace().getRoot()), which means the job acquires a lock on the workspace root itself. Any other operation I then perform, such as "Delete" or a File > "New" project creation, goes into a waiting state.
But why do the Eclipse "Delete" and File > "New" operations block my entire UI, whereas the jobs I created only go to a waiting state while the lock on the workspace root is held?
Can I implement my own "Delete" operation that, like my other jobs, goes to a waiting state instead of blocking the UI when another job with the same scheduling rule is already running?
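For reference, the scheduling setup is roughly like this (the job body is just a placeholder):

import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;

public class FileCreationJobExample {

    public static void scheduleFileCreationJob() {
        Job job = new Job("Create files") {
            @Override
            protected IStatus run(IProgressMonitor monitor) {
                // ...create files under the project here...
                return Status.OK_STATUS;
            }
        };
        // Lock the whole workspace: anything else that needs the workspace
        // (or any resource in it) has to wait until this job finishes.
        job.setRule(ResourcesPlugin.getWorkspace().getRoot());
        job.schedule();
    }
}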

The File > New and Delete operations don't use the Jobs API, but they do wait for access to the workspace, so they can block the UI until it is available.
You could write New and Delete operations that use the Jobs API so that they run in the background, along the lines of the sketch below.
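A minimal sketch of such a background delete, assuming a WorkspaceJob and the rule factory's delete rule (class and method names here are only illustrative):

import org.eclipse.core.resources.IResource;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.resources.WorkspaceJob;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;

public class BackgroundDelete {

    public static void deleteInBackground(final IResource resource) {
        WorkspaceJob job = new WorkspaceJob("Delete " + resource.getName()) {
            @Override
            public IStatus runInWorkspace(IProgressMonitor monitor) throws CoreException {
                resource.delete(true, monitor);
                return Status.OK_STATUS;
            }
        };
        // Only lock what the delete actually needs; if another job holds a
        // conflicting rule, this job waits in the Jobs framework instead of
        // freezing the UI thread.
        job.setRule(ResourcesPlugin.getWorkspace().getRuleFactory().deleteRule(resource));
        job.schedule();
    }
}

Because the conflict is then handled by the Jobs framework, a job scheduled while another job holds a conflicting rule simply sits in the waiting state rather than blocking the UI.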

Github Actions Concurrency Queue

Currently we are using GitHub Actions as CI for our infrastructure.
The infrastructure uses Terraform, and a code change to a module triggers a plan and deploy for the changed module only (hence only the related modules are updated, e.g. one pod container).
Since an auto-update can be triggered by a push in another GitHub repository, updates can arrive in roughly the same time frame, e.g. Pod A's image is updated and Pod B's image is updated.
Without any concurrency control in place, one of the actions will fail with a lock timeout, since Terraform holds the state lock.
After implementing concurrency it is fine for just two simultaneous pushes: the second one can wait for the first one to finish.
Yet if more are coming, GitHub's concurrency only keeps the last push in the queue and cancels the waiting ones (the in-progress one can still continue). This is logical from a single-app perspective, but since our infra code relies on diff checks, skipping the deployment of a cancelled job actually skips a deployment!
Is there a mechanism to queue workflows (or even give a queue wait timeout) on GitHub Actions?
Eventually we wrote our own script in the workflow to wait for previous runs:
Get information on the current run
Collect the previous runs that have not completed
Wait (in a loop) until they are completed
Once the waiting loop exits, continue with the workflow (see the sketch below)
Tutorial on checking the status of workflow jobs:
https://www.softwaretester.blog/detecting-github-workflow-job-run-status-changes
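Not the actual workflow script, but a rough sketch of the same wait-loop idea written in Java against the GitHub REST API ("my-org/infra", the 30-second poll interval and the crude total_count parsing are placeholders; a fuller script would also count queued runs and exclude the current run explicitly, e.g. via GITHUB_RUN_ID):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WaitForPreviousRuns {

    private static final Pattern TOTAL_COUNT = Pattern.compile("\"total_count\"\\s*:\\s*(\\d+)");

    public static void main(String[] args) throws Exception {
        String repo = "my-org/infra";                  // placeholder owner/repo
        String token = System.getenv("GITHUB_TOKEN");  // provided by the workflow
        HttpClient client = HttpClient.newHttpClient();

        // Poll the "list workflow runs" endpoint until no other run is in progress.
        while (true) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(
                    "https://api.github.com/repos/" + repo + "/actions/runs?status=in_progress"))
                    .header("Accept", "application/vnd.github+json")
                    .header("Authorization", "Bearer " + token)
                    .GET()
                    .build();

            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

            Matcher m = TOTAL_COUNT.matcher(body);
            int inProgress = m.find() ? Integer.parseInt(m.group(1)) : 0;

            // The current run counts itself, so anything above 1 means an older
            // run is still deploying and we keep waiting.
            if (inProgress <= 1) {
                break;
            }
            Thread.sleep(30_000); // wait 30s before checking again
        }
        // ...continue with terraform plan/apply once the loop exits
    }
}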

Should we unschedule Sling Jobs running within AEM after they are completed?

I am creating multiple Sling jobs on the fly using the org.apache.sling.commons.scheduler.Scheduler OSGi service in AEM,
i.e. scheduler.schedule(Runnable, ScheduleOptions);
I have a requirement that these Sling jobs run only once, so I am using ScheduleOptions.AT(Date date, int times, long period) (ScheduleOptions docs)
and passing times=1 as a parameter.
(Also, what is the period parameter?)
The job successfully runs only once.
My question is: am I supposed to keep track of this job by name and unschedule it using Scheduler.unschedule(String jobName) after it has finished running?
Will completed Sling jobs that are not unscheduled consume memory on the AEM server?
Will these completed but never-unscheduled jobs cause my AEM server to slow down and later require some purge activity as maintenance?
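For reference, the one-shot scheduling looks roughly like this (component and job names are just illustrative; as far as I understand, period is the interval in seconds between repeated runs, so it should not matter when times = 1):

import java.util.Calendar;
import java.util.Date;

import org.apache.sling.commons.scheduler.ScheduleOptions;
import org.apache.sling.commons.scheduler.Scheduler;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = OneShotJobScheduler.class)
public class OneShotJobScheduler {

    @Reference
    private Scheduler scheduler;

    public void scheduleOnce(String jobName, Runnable work) {
        Calendar cal = Calendar.getInstance();
        cal.add(Calendar.MINUTE, 5);        // fire five minutes from now (arbitrary)
        Date fireAt = cal.getTime();

        // times = 1 -> run once; period is the gap between repeats,
        // which should be irrelevant when times = 1.
        ScheduleOptions options = scheduler.AT(fireAt, 1, 0)
                .name(jobName)              // name it so it can be unscheduled later
                .canRunConcurrently(false);

        scheduler.schedule(work, options);
    }
}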
According to https://sling.apache.org/documentation/bundles/apache-sling-eventing-and-job-handling.html#scheduled-jobs
Internally the scheduled Jobs use the Commons Scheduler Service. But in addition they are persisted (by default below /var/eventing/scheduled-jobs) and survive therefore even server restarts. When the scheduled time is reached, the job is automatically added as regular Sling Job through the JobManager.
I had a problem with scheduled jobs before (they were triggered on a daily basis). When the server was restarted, the scheduled jobs were not un-persisted, and a new job doing the same action was scheduled (the job was scheduled in the @Activate method). As a result, I got several jobs doing the same action at the scheduled time, so I had to unschedule them in the @Deactivate method.
You can run an experiment and verify that there are no duplicated jobs under /var/eventing/scheduled-jobs. A sketch of the schedule/unschedule pairing is shown below.
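A minimal sketch of that schedule-in-@Activate / unschedule-in-@Deactivate pairing (component name, job name and cron expression are only illustrative):

import org.apache.sling.commons.scheduler.ScheduleOptions;
import org.apache.sling.commons.scheduler.Scheduler;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;

@Component(immediate = true)
public class DailyCleanupScheduler implements Runnable {

    private static final String JOB_NAME = "daily-cleanup-job"; // illustrative

    @Reference
    private Scheduler scheduler;

    @Activate
    protected void activate() {
        // EXPR takes a Quartz-style cron expression; this one fires daily at 02:00.
        ScheduleOptions options = scheduler.EXPR("0 0 2 * * ?")
                .name(JOB_NAME)
                .canRunConcurrently(false);
        scheduler.schedule(this, options);
    }

    @Deactivate
    protected void deactivate() {
        // Explicitly unschedule by name so a bundle restart or redeploy
        // does not leave a duplicate registration behind.
        scheduler.unschedule(JOB_NAME);
    }

    @Override
    public void run() {
        // daily job body goes here
    }
}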

How to operate in Airflow so that a task reruns and the downstream tasks continue

I set up a workflow in Airflow, and one of the jobs failed. After I fixed the problem, I want to rerun the failed task and continue the workflow. The details are as below:
As above, I prepared to run the task "20_validation" and pressed the 'Run' button, like below:
But the problem is that when the task '20_validation' finished, the downstream tasks were not started. What should I do?
Use the clear button directly below the run button you drew the box around in your screenshot.
This will clear the failed task's state and the state of all the tasks after it (since Downstream is selected to the right), causing them all to be run or rerun.
Since the upstream task failed, the downstream tasks shouldn't have been run anyway, so it shouldn't cause successful tasks to rerun.

Spring Batch JobOperator - how are multiple concurrent instances of a job from the same XML file controlled?

When we run multiple concurrent jobs with different parameters, how can we control (stop, restart) the appropriate jobs? Our internal code provides the JobExecution object, but under the covers the JobOperator uses the job name to get the job instance.
In our case all of the jobs are from "do-stuff.xml" (okay, it's sanitized and not very original). After looking at the Spring Batch source code, our concern is that if there is more than one job running and we stop a job, it will take the most recently submitted job and stop it.
The JobOperator will allow you to fetch all running executions of the job using getRunningExecutions(String jobName). You should be able to iterate over that set to find the one you want, then call stop(long executionId) on it, for example as in the sketch below.
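A sketch along those lines, assuming a JobExplorer is available to look up the parameters of each execution (the job name "do-stuff-job" and the "region" parameter are just illustrative):

import java.util.Set;

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.launch.JobOperator;

public class DoStuffJobStopper {

    private final JobOperator jobOperator;
    private final JobExplorer jobExplorer;

    public DoStuffJobStopper(JobOperator jobOperator, JobExplorer jobExplorer) {
        this.jobOperator = jobOperator;
        this.jobExplorer = jobExplorer;
    }

    // Stops the running execution of "do-stuff-job" whose parameters contain
    // the given "region" value, leaving the other concurrent executions alone.
    public void stopExecutionFor(String region) throws Exception {
        // getRunningExecutions returns the ids of all currently running
        // executions registered under the given job name.
        Set<Long> runningIds = jobOperator.getRunningExecutions("do-stuff-job");

        for (Long executionId : runningIds) {
            JobExecution execution = jobExplorer.getJobExecution(executionId);
            String executionRegion = execution.getJobParameters().getString("region");

            if (region.equals(executionRegion)) {
                // Sends a stop signal to that specific execution only.
                jobOperator.stop(executionId);
                return;
            }
        }
    }
}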
Alternatively, we've also implemented listeners (both at step and chunk level) to check an outage status table. When we want to implement a system-wide outage, we add the outage there and have our listener throw an exception to bring our jobs down. Once the outage is lifted, all "failed" executions may be restarted.

Adaptive processes using rules

In jBPM, we create a process with a rule task and then deploy the process. During process execution, before the rule task is executed, I changed the business logic in the DRL file and saved it. But this change is not reflected in the currently running instance. Is this the correct behavior for an adaptive process?
When you change the business logic in your DRL file, you should release a new deployment version in order to keep your existing process executions intact: the changes are reflected in new process instances, while old process instances keep the rules that were defined when they started.
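For example, a rough sketch using the jBPM services API, assuming a kjar-based deployment where each rule change is released under a new GAV version (the coordinates and class names are only illustrative):

import org.jbpm.kie.services.impl.KModuleDeploymentUnit;
import org.jbpm.services.api.DeploymentService;

public class RuleUpdateDeployer {

    private final DeploymentService deploymentService;

    public RuleUpdateDeployer(DeploymentService deploymentService) {
        this.deploymentService = deploymentService;
    }

    // Deploys the kjar containing the updated DRL under a new version.
    // Already-running instances keep using the version they were started
    // with; new instances pick up the new rules.
    public void deployUpdatedRules() {
        KModuleDeploymentUnit newVersion =
                new KModuleDeploymentUnit("com.example", "order-process-kjar", "1.0.1");

        deploymentService.deploy(newVersion);
    }
}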