I am using jBPM version 7.58 for workflow management. There is one task in my database which is stuck in the Created state. As per the jBPM documentation here (a slightly earlier version):
A newly created task starts in the "Created" stage. Usually, it will
then automatically become "Ready", after which the task will show up
on the task list of all the actors that are allowed to execute the
task.
The documentation is vague here, using the word "Usually". Now the question is: what could be the reason that my task is stuck in the Created state, and what can I do about it?
If an ECS task runs in multiple instances, the rolling deployment methods usually terminate tasks one by one and replace each with a newer version (a newer task definition), in order to maximize uptime.
But we have some containers that may make schema upgrades on first run (as an example), and here we never want old versions of a container to run at the same time as newer versions.
Another example is coupled data formats, where a new front-end talks to a new API. We'd rather have a moment of downtime than have old front-ends talk to new APIs, or new front-ends talk to old APIs.
ECS doesn't seem to provide a deployment strategy that terminates all instances of a task before starting to roll out new ones.
Before I roll my own bash script that reduces the task count to 0 prior to an update and restores it to its original value afterwards, does anyone know if AWS provides a way to cleanly drain a service of tasks before rolling out a new task definition?
In a similar scenario I would do it as follows.
I would add a CodeDeploy step to my pipeline (if using stack code) and set the traffic shifting to "All-at-once"; this doc can point you in the right direction.
If I were unable to work with the stack code, I would create a Lambda triggered by the pipeline one step before the deploy. This Lambda could interact with ECS, resetting the tasks so that they receive the new version of the application. In this situation, however, some downtime will be felt, of at least 2-5 minutes, depending on how well your application is optimized. You can see a little more about the library that will allow you to do this here.
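For the Lambda route, here is a minimal sketch of the drain-then-redeploy sequence, assuming the AWS SDK for Java v2; the cluster, service, task definition names, and final desired count are placeholders:

```java
import software.amazon.awssdk.services.ecs.EcsClient;
import software.amazon.awssdk.services.ecs.model.DescribeServicesRequest;
import software.amazon.awssdk.services.ecs.model.UpdateServiceRequest;
import software.amazon.awssdk.services.ecs.waiters.EcsWaiter;

public class AllAtOnceRedeploy {
    public static void main(String[] args) {
        // Placeholder names: substitute your own cluster, service, and task definition.
        String cluster = "my-cluster";
        String service = "my-service";
        String newTaskDef = "my-task:42";

        try (EcsClient ecs = EcsClient.create()) {
            // 1. Drain: scale the service down to zero tasks.
            ecs.updateService(UpdateServiceRequest.builder()
                    .cluster(cluster).service(service).desiredCount(0).build());

            // 2. Wait until the old tasks have actually stopped (the downtime window starts here).
            try (EcsWaiter waiter = ecs.waiter()) {
                waiter.waitUntilServicesStable(DescribeServicesRequest.builder()
                        .cluster(cluster).services(service).build());
            }

            // 3. Point the service at the new task definition and scale back up.
            ecs.updateService(UpdateServiceRequest.builder()
                    .cluster(cluster).service(service)
                    .taskDefinition(newTaskDef).desiredCount(2).build());
        }
    }
}
```

Scaling to zero before switching the task definition is exactly what guarantees old and new versions never overlap, at the cost of the downtime mentioned above.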
Currently we are using GitHub Actions for CI for our infrastructure.
The infrastructure uses Terraform, and a code change in a module triggers plan and deploy for the changed module only (hence it only updates related modules, e.g. one pod container).
Since auto-updates can be triggered by pushes to other GitHub repositories, they can arrive in roughly the same time frame, e.g. Pod A's image is updated and Pod B's image is updated.
Without any concurrency control in place, since Terraform holds a lock, one of the actions will fail due to a lock timeout.
After implementing concurrency, two pushes at the same time are fine, as the second one can wait for the first one to finish.
Yet if more are coming, GitHub's concurrency only keeps the last push in the queue and cancels the other waiting ones (the in-progress one can still continue). This is logical from a single-application perspective, but since our infra code relies on diff checks, a deployment cancelled while queued is a deployment skipped entirely!
Is there a mechanism to queue workflows (or maybe even set a queue wait timeout) on GitHub Actions?
Eventually we wrote our own script in the workflow to wait for previous runs (a rough sketch follows the tutorial link below):
- Get information on the current run
- Collect previous non-completed runs and wait (in a loop) until they are completed
- Once out of the waiting loop, continue on with the workflow
Tutorial on checking status of workflow jobs
https://www.softwaretester.blog/detecting-github-workflow-job-run-status-changes
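As a rough sketch of such a waiting loop, here is the same idea against the public GitHub REST API ("List workflow runs for a repository", filtered by status), assuming Java 11+'s built-in HTTP client; OWNER/REPO and the polling interval are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WaitForPreviousRuns {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your repository; GITHUB_TOKEN is available inside Actions runs.
        String url = "https://api.github.com/repos/OWNER/REPO/actions/runs?status=in_progress";
        String token = System.getenv("GITHUB_TOKEN");

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/vnd.github+json")
                .build();
        Pattern totalCount = Pattern.compile("\"total_count\"\\s*:\\s*(\\d+)");

        // Poll until only this run is left in progress (the current run counts too, hence <= 1).
        while (true) {
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            Matcher m = totalCount.matcher(body);
            int running = m.find() ? Integer.parseInt(m.group(1)) : 0;
            if (running <= 1) break;  // only this run remains: safe to continue
            Thread.sleep(30_000);     // wait 30 seconds before polling again
        }
    }
}
```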
We have some spring-batch jobs that are triggered by Autosys via shell scripts as short-lived processes.
Right now there's no way to see what is going on inside the spring-batch process, so I was exploring ways to view the status of the jobs and to manage (stop) them.
Spring Cloud Data Flow is one of the options that I was exploring - but it seems that may not work when jobs are scheduled with Autosys.
What are the other options that I can explore in this regard and what is the recommended approach to manage spring-batch jobs now?
To stop a job, you first need the ID of the job execution to stop. This can be done using the JobExplorer API, which lets you explore the metadata that Spring Batch keeps in the job repository. Once you have the job execution ID, you can stop it by calling the JobOperator#stop method; see the Stopping a Job section of the reference documentation.
This is independent of any method you used to launch the job (either manually, or via a scheduler or a graphical tool) and allows you to gracefully stop a job and leave the repository in a consistent state (ready for a restart if needed).
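A minimal sketch of that lookup-then-stop sequence, assuming a JobExplorer and JobOperator are already configured as beans; the job name is a placeholder:

```java
import java.util.Set;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.launch.JobOperator;

public class JobStopper {
    private final JobExplorer jobExplorer;
    private final JobOperator jobOperator;

    public JobStopper(JobExplorer jobExplorer, JobOperator jobOperator) {
        this.jobExplorer = jobExplorer;
        this.jobOperator = jobOperator;
    }

    // Gracefully stop every running execution of the given job.
    public void stopRunningExecutions(String jobName) throws Exception {
        Set<JobExecution> running = jobExplorer.findRunningJobExecutions(jobName);
        for (JobExecution execution : running) {
            // JobOperator#stop sends a stop signal; the job stops at the next chunk boundary.
            jobOperator.stop(execution.getId());
        }
    }
}
```

Because both beans read and write the shared job repository, this works no matter whether Autosys, a shell script, or a UI launched the execution.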
In jBPM, suppose we create a process with a rule task and then deploy the process. During process execution, before the rule task is executed, I changed the business logic in the DRL file and saved it. But this change is not reflected in the currently running instance. Is this the correct behavior for an adaptive process?
When you change the business logic in your DRL file, you should release a new deployment version in order to keep your process executions consistent: the changes are reflected in new process instances, while old process instances keep the rules that were defined when they started.
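As a rough sketch of that versioned redeployment, assuming the process and rules are packaged as a kjar (the GAV coordinates are placeholders), the KIE API lets you point a container at the new release so that new instances pick up the changed DRL:

```java
import org.kie.api.KieServices;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;

public class KjarUpgrade {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();

        // Placeholder coordinates: the kjar containing the process and DRL rules.
        ReleaseId v1 = ks.newReleaseId("com.example", "my-process", "1.0.0");
        KieContainer container = ks.newKieContainer(v1);

        // ... process instances started now use the 1.0.0 rules ...

        // After changing the DRL, build and release version 1.0.1 of the kjar,
        // then upgrade the container; new instances pick up the new rules.
        ReleaseId v2 = ks.newReleaseId("com.example", "my-process", "1.0.1");
        container.updateToVersion(v2);
    }
}
```

In a managed jBPM setup the same version bump is usually done by deploying the new kjar through Business Central rather than in code.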
I am working to migrate from Quartz 1.6 to 2.1 and to use a JDBCJobStore. Previously, the jobs were loaded via an XML file when the webapp started. The scheduler is now running using the JDBCJobStore, but I don't understand how to add the jobs that need to run on an ongoing basis (not one-off jobs) to the database.
My first thought is to create a servlet that runs on startup and adds the jobs to the database. But my concern is that this will be executed every time I restart the app, and the jobs will get duplicated.
Thanks,
steve
The jobs won't disappear from the database when you do a restart. So within your servlet, when it starts up, check whether the jobs already exist before adding any. When you create your jobs you can give them identities; using those identities and some Quartz methods, you can check whether they already exist.
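A minimal sketch of that startup check with the Quartz 2.x API; the job class, identities, and schedule are placeholders:

```java
import static org.quartz.JobBuilder.newJob;
import static org.quartz.SimpleScheduleBuilder.simpleSchedule;
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.impl.StdSchedulerFactory;

public class StartupJobRegistrar {

    // Placeholder job implementation; your real recurring job goes here.
    public static class ReportJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // job work goes here
        }
    }

    public void registerJobs() throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        // A stable identity lets us recognize the job across restarts.
        JobKey jobKey = new JobKey("nightly-report", "recurring");

        // Only add the job if a previous startup hasn't already stored it in the JDBCJobStore.
        if (!scheduler.checkExists(jobKey)) {
            JobDetail job = newJob(ReportJob.class).withIdentity(jobKey).build();
            Trigger trigger = newTrigger()
                    .withIdentity("nightly-report-trigger", "recurring")
                    .withSchedule(simpleSchedule().withIntervalInHours(24).repeatForever())
                    .build();
            scheduler.scheduleJob(job, trigger);
        }
    }
}
```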
It sounds like the memory-based scheduler is a better fit for these fixed jobs. You can create more than one scheduler, one in-memory and one JDBC-backed, if that makes sense for your application.
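If you do run the two side by side, here is a rough sketch of configuring them programmatically; the instance names are arbitrary, and the JDBC store assumes a "myDS" data source declared via the usual org.quartz.dataSource.* properties:

```java
import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class DualSchedulers {
    public static void main(String[] args) throws SchedulerException {
        // In-memory scheduler for the fixed, code-defined jobs.
        Properties ramProps = new Properties();
        ramProps.setProperty("org.quartz.scheduler.instanceName", "RamScheduler");
        ramProps.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        ramProps.setProperty("org.quartz.threadPool.threadCount", "3");
        ramProps.setProperty("org.quartz.jobStore.class", "org.quartz.simpl.RAMJobStore");
        Scheduler ramScheduler = new StdSchedulerFactory(ramProps).getScheduler();

        // JDBC-backed scheduler for the jobs that must survive restarts.
        Properties jdbcProps = new Properties();
        jdbcProps.setProperty("org.quartz.scheduler.instanceName", "JdbcScheduler");
        jdbcProps.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        jdbcProps.setProperty("org.quartz.threadPool.threadCount", "3");
        jdbcProps.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        jdbcProps.setProperty("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        jdbcProps.setProperty("org.quartz.jobStore.dataSource", "myDS");
        Scheduler jdbcScheduler = new StdSchedulerFactory(jdbcProps).getScheduler();

        ramScheduler.start();
        jdbcScheduler.start();
    }
}
```

Giving each scheduler a distinct instance name keeps them apart in the scheduler repository, so the two stores never interfere.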