Adaptive processes using rules - Drools

In jBPM, we create a process with a rule task and then deploy the process. During process execution, before the rule task is executed, I changed the business logic in the DRL file and saved it, but this change is not reflected in the currently running instance. Is this the correct behavior for an adaptive process?

When you change the business logic in your DRL file, you should release a new deployment version in order to maintain your process execution: the changes are reflected in new process instances, while old process instances keep the rules that were defined when they started.
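
A minimal sketch of what releasing a new version can look like with the KIE API, assuming a hypothetical kjar com.example:report-process whose 1.0.1 release contains the updated DRL; new process instances started from the new container use the new rules, while already-running instances keep the old ones:

```java
import org.kie.api.KieServices;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;

public class DeployNewRuleVersion {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();

        // Hypothetical GAV: version 1.0.1 of the kjar contains the updated DRL.
        ReleaseId newRelease = ks.newReleaseId("com.example", "report-process", "1.0.1");

        // A container for the new release makes the changed rules available to
        // process instances created from this point on; existing instances are untouched.
        KieContainer container = ks.newKieContainer(newRelease);

        // ... create a KieSession from 'container' and start new process instances
    }
}
```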

Related

Terminate all ECS tasks before new deployment

If an ECS task runs as multiple instances, the rolling deployment methods usually terminate tasks one by one and replace them with a newer version (newer task definition), in order to maximize uptime.
But we have some containers that may perform schema upgrades on first run (as an example), and here we never want old versions of a container to run at the same time as newer versions.
Another example is coupled data formats, where a new front-end talks to a new API. We'd rather have a moment of downtime than have old front-ends talk to new APIs, or new front-ends talk to old APIs.
ECS doesn't seem to provide a deployment strategy that terminates all instances of a task before starting to roll out new ones.
Before I roll some bash script that reduces task counts to 0 prior to an update and returns it to its original value, does anyone know if AWS provides a way to cleanly drain a service of tasks before rolling out a new task definition?
In a similar scenario, I would do it as follows.
If you are using a stack code (infrastructure as code), I would add a CodeDeploy step to the pipeline and set the traffic shifting to "All-at-once"; this doc can start to point you in the right direction.
If I were unable to work with the stack code, I would create a Lambda function, triggered by the pipeline one step before the deploy, that interacts with ECS and resets the tasks so they receive the new version of the application. In this situation there will be some downtime, of at least 2 to 5 minutes, depending on how your application is optimized. You can see a little more about the lib that will allow you to do this here.
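
As a rough sketch of the scale-down-then-redeploy idea (there is no built-in ECS strategy for it, so you script it yourself), this is what it could look like with the AWS SDK for Java v2; the cluster, service, and task definition names are hypothetical:

```java
import software.amazon.awssdk.services.ecs.EcsClient;
import software.amazon.awssdk.services.ecs.model.DescribeServicesRequest;
import software.amazon.awssdk.services.ecs.model.UpdateServiceRequest;

public class AllAtOnceRedeploy {
    public static void main(String[] args) throws InterruptedException {
        String cluster = "my-cluster";               // hypothetical cluster name
        String service = "report-api";               // hypothetical service name
        String newTaskDefinition = "report-api:42";  // hypothetical new task definition revision
        int desiredCount = 2;                        // original task count to restore afterwards

        try (EcsClient ecs = EcsClient.create()) {
            // 1. Scale the service to zero so no old tasks keep running.
            ecs.updateService(UpdateServiceRequest.builder()
                    .cluster(cluster).service(service).desiredCount(0).build());

            // 2. Wait for the old tasks to drain (a production script would add a timeout).
            while (runningCount(ecs, cluster, service) > 0) {
                Thread.sleep(5_000);
            }

            // 3. Point the service at the new task definition and scale back up.
            ecs.updateService(UpdateServiceRequest.builder()
                    .cluster(cluster).service(service)
                    .taskDefinition(newTaskDefinition)
                    .desiredCount(desiredCount)
                    .build());
        }
    }

    private static int runningCount(EcsClient ecs, String cluster, String service) {
        return ecs.describeServices(DescribeServicesRequest.builder()
                        .cluster(cluster).services(service).build())
                .services().get(0).runningCount();
    }
}
```

The downtime window mentioned above is exactly step 2: nothing serves traffic between the last old task stopping and the first new task passing its health checks.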

What is the current recommended approach to manage/stop a spring-batch job?

We have some Spring Batch jobs that are triggered by Autosys via shell scripts and run as short-lived processes.
Right now there's no way to see what is going on in the Spring Batch process, so I was exploring ways to view the status and manage (stop) the jobs.
Spring Cloud Data Flow is one of the options that I was exploring, but it seems it may not work when jobs are scheduled with Autosys.
What are the other options that I can explore in this regard and what is the recommended approach to manage spring-batch jobs now?
To stop a job, you first need to get the ID of the job execution to stop. This can be done using the JobExplorer API that allows you to explore meta-data that Spring Batch is aware of in the job repository. Once you get the job execution ID, you can stop it by calling the JobOperator#stop method, please refer to the Stopping a job section of the reference documentation.
This is independent of any method you used to launch the job (either manually, or via a scheduler or a graphical tool) and allows you to gracefully stop a job and leave the repository in a consistent state (ready for a restart if needed).
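
For illustration, a minimal sketch of that approach, assuming JobExplorer and JobOperator are available as Spring beans and a hypothetical job name:

```java
import java.util.Set;

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.launch.JobOperator;

public class JobStopper {

    private final JobExplorer jobExplorer;
    private final JobOperator jobOperator;

    public JobStopper(JobExplorer jobExplorer, JobOperator jobOperator) {
        this.jobExplorer = jobExplorer;
        this.jobOperator = jobOperator;
    }

    public void stopRunningExecutions(String jobName) throws Exception {
        // Look up the currently running executions for this job in the job repository.
        Set<JobExecution> running = jobExplorer.findRunningJobExecutions(jobName);

        for (JobExecution execution : running) {
            // Sends a stop signal; the job stops gracefully at the next opportunity,
            // leaving the repository in a consistent, restartable state.
            jobOperator.stop(execution.getId());
        }
    }
}

// Usage (hypothetical job name):
// new JobStopper(jobExplorer, jobOperator).stopRunningExecutions("reportJob");
```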

PowerShell session/environment isolation - Jobs sharing same context?

I'm testing a workflow runbook that utilizes Add-Type to add some custom C# code.
All of a sudden I started getting 'type already exists' errors on subsequent test jobs, as if a new PSSession is not being created.
In other words, it looks like new jobs are sharing the same execution context. I only get this locally if I try to run the same command twice per PS instance.
The type in question is a static class with some Extension methods. Since it also happens to be the first type declared in the source block, I don't doubt other non-static types would throw errors as well.
I've executed this a handful of times already, so I fully expect that 'eventually' this will stop happening, but I can't seem to force it, and I have no idea what I could've done to trip it into this situation, either.
Seeing evidence of shared execution contexts across jobs like this - even (especially?) if only temporary - makes me wonder if some or all of the general execution inconsistencies we've seen in the past, when making and deploying changes and performing subsequent tests soon after, are related to this.
I'm tempted to think that this is simply a part of the difference between a Test Job and a 'real' one, but that raises questions about the validity of the Test jobs themselves WRT mimicking Published Jobs.
Are all Azure Automation Jobs supposed to execute in Isolation? Can this be controlled/exploited by a developer?
Each automation account has its own isolated sandboxes where its jobs run, and those sandboxes are distributed among a number of worker machines. For test jobs, Automation tries to improve job start time (since the [make code change, retest] cycle is very common) by reusing the sandbox from previous test jobs of the same runbook, if that sandbox has not been cleaned up yet, so that a new sandbox does not have to be spun up for each test job (sandbox creation is one reason for a longer job start time than desired). Due to this behavior, if you execute test jobs of the same runbook within a short amount of time, you will get the behavior you're seeing above.
However, even for production jobs, jobs of the same automation account (across runbooks) can share the same sandboxes. We randomly distribute jobs across our worker machines, so it's possible that job A is queued for execution and placed on worker W, and then 5 minutes later job B is queued for execution and placed on worker W as well. If job A and job B are of the same automation account and have the same "demands" in terms of modules / module versions, they will be placed in the same sandbox, if job A's sandbox is still around. "Module / module version demands" does not mean the modules used by the runbook, but the modules / latest module versions that existed in the automation account at the time the job was started, the runbook was scheduled (for jobs started via schedule), or the runbook was assigned to a webhook (for jobs started via webhook).
In terms of resolving your specific problem, you could surround Add-Type with a try/catch statement, or maybe use Add-Type -IgnoreWarnings.

Automatically handling job and trigger changes in Quartz.net using AdoJobStore

I am writing a Quartz.net application using AdoJobStore to allow automated report scheduling.
In my scenario, users will define custom reports to be scheduled in one application which will add the required jobs and triggers to the database (using the AdoJobStore routines).
A separate Quartz.net application then reads these settings from the database (also using the AdoJobStore routines) and emails the reports as necessary.
Is there a way to get the Quartz scheduler to automatically start scheduling new jobs and triggers that were added to the database after the scheduler last started, or will I need to write a routine that periodically checks for database changes and, if changes are found, restarts the Quartz scheduler instance?
You can handle all of this directly with Quartz.Net. Here's one way to do it:
Set up a Quartz.Net server as a Windows service. The distribution comes with a Windows Service implementation, or you can build your own. Enable remoting on the Quartz.Net server.
From the application where users will configure their reports and schedules, connect to the Quartz.Net server using the Quartz.Net library and directly schedule the jobs and triggers as necessary.
You'll probably want to store the user's report configuration elsewhere in case the user wants to look at it later or change/copy it. Store this data somewhere other than Quartz.Net. If the user changes the stored report configuration, connect again to the Quartz.Net server and update/reschedule the jobs using the Quartz.Net library. Alternatively, you could create a job that runs on the Quartz.Net server and periodically checks whether there have been any report configuration changes.
You'll have to create the actual jobs that generate your reports in a generic enough fashion that any report can be built by passing data to the job via the JobDataMap, instead of having to create a separate job class for each report.
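
As a rough illustration of that JobDataMap approach (shown with the Java Quartz API, which the Quartz.Net API closely mirrors, and with hypothetical map keys), a single generic job class can cover every report:

```java
import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;

// One generic job class serves every report; the per-report details travel
// in the JobDataMap instead of being hard-coded into separate job classes.
public class GenericReportJob implements Job {

    @Override
    public void execute(JobExecutionContext context) {
        JobDataMap data = context.getMergedJobDataMap();

        // Hypothetical keys set by the scheduling application.
        String reportId = data.getString("reportId");
        String recipients = data.getString("recipients");
        String outputFormat = data.getString("outputFormat");

        // ... look up the stored report configuration by reportId, render it
        // in outputFormat, and email the result to the recipients
    }
}
```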

How to load/bootstrap ongoing jobs with a Quartz JDBCJobStore

I am working to migrate from Quartz 1.6 to 2.1 and use a JDBCJobStore. Previously, the jobs were loaded via an XML file when the webapp started. The scheduler is now running using the JDBCJobStore, but I don't understand how to add the jobs to the database that need to run on an ongoing basis (not one-off jobs).
My first thought is to create a servlet that runs on startup and adds the jobs to the database. But my concern is that this will be executed every time I need to restart the app and the jobs will get duplicated.
Thanks,
steve
The jobs won't disappear from the database when you do a restart. So within your servlet, when it starts up, check whether the jobs already exist before adding any. When you create your jobs you can give them identities; using the identities and some Quartz methods, you can check whether they already exist.
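
A minimal sketch of that startup check with the Quartz 2.1 API, using a hypothetical job named "nightlyReport" in group "reports":

```java
import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

public class StartupJobLoader {

    public void registerIfAbsent(Scheduler scheduler) throws SchedulerException {
        JobKey jobKey = new JobKey("nightlyReport", "reports");

        // checkExists consults the JDBCJobStore, so a job added during a previous
        // startup is detected and not scheduled a second time.
        if (!scheduler.checkExists(jobKey)) {
            JobDetail job = JobBuilder.newJob(ReportJob.class)
                    .withIdentity(jobKey)
                    .build();

            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("nightlyReportTrigger", "reports")
                    .withSchedule(CronScheduleBuilder.dailyAtHourAndMinute(2, 0))
                    .build();

            scheduler.scheduleJob(job, trigger);
        }
    }

    // Hypothetical ongoing job.
    public static class ReportJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // report generation logic goes here
        }
    }
}
```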
It sounds like the memory-based scheduler is a better fit for these fixed jobs. You can create more than one scheduler (one in-memory, one JDBC) if that makes sense for your application.
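
If you go that route, a rough sketch of running both side by side (the instance names and data source settings are hypothetical):

```java
import java.util.Properties;

import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class DualSchedulers {
    public static void main(String[] args) throws SchedulerException {
        // In-memory scheduler: the fixed jobs are re-registered in code on every startup.
        Properties ramProps = new Properties();
        ramProps.put("org.quartz.scheduler.instanceName", "RamScheduler");
        ramProps.put("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        ramProps.put("org.quartz.threadPool.threadCount", "3");
        ramProps.put("org.quartz.jobStore.class", "org.quartz.simpl.RAMJobStore");
        Scheduler ramScheduler = new StdSchedulerFactory(ramProps).getScheduler();

        // JDBC-backed scheduler: its jobs survive restarts in the database.
        Properties jdbcProps = new Properties();
        jdbcProps.put("org.quartz.scheduler.instanceName", "JdbcScheduler");
        jdbcProps.put("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        jdbcProps.put("org.quartz.threadPool.threadCount", "3");
        jdbcProps.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        jdbcProps.put("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        jdbcProps.put("org.quartz.jobStore.dataSource", "myDS");
        jdbcProps.put("org.quartz.dataSource.myDS.driver", "org.h2.Driver");      // hypothetical database settings
        jdbcProps.put("org.quartz.dataSource.myDS.URL", "jdbc:h2:~/quartz");
        jdbcProps.put("org.quartz.dataSource.myDS.user", "sa");
        jdbcProps.put("org.quartz.dataSource.myDS.password", "");
        Scheduler jdbcScheduler = new StdSchedulerFactory(jdbcProps).getScheduler();

        ramScheduler.start();
        jdbcScheduler.start();
    }
}
```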