Cron Binding - Does it have a load balancing option? - kubernetes

I did some tests with the Dapr Cron Binding and noticed that each instance of my application is triggered by that binding.
However, I'm afraid this will cause multiple unnecessary actions as a result (e.g. multiple requests to a third application).
I would like to know if it is possible to load balance the Cron Binding calls, similarly to what happens with service invocation.
I'd also like to ask whether this applies to the other input bindings as well, that is, is a trigger sent to each instance or only to a single one?

Related

CQRS and Passing Data

Suppose I have an aggregate containing some data, and when it reaches a certain state I'd like to take all of that state and pass it to some outside service. For argument's and simplicity's sake, let's just say it is an aggregate that has a list, and when all items in that list are checked off, I'd like to send the entire state to some outside service. Now, when I'm handling the command for checking off the last item in the list, I'll know that I'm at the end, but it doesn't seem correct to send the state to the outside system from the processing of the command. So given this scenario, what is the recommended approach if the outside system requires all of the state of the aggregate? Should the outside system build its own copy of the data based on the aggregate events, or is there some better approach?
Should the outside system build its own copy of the data based on the aggregate events?
Probably not -- it's almost never a good idea to share the responsibility of rehydrating an aggregate from its history. The service that owns the object should be responsible for rehydration.
The first key idea to understand is where in the flow the call to the outside service should happen.
First, the domain model processes the command arguments, computing the update to the event history, including the ChecklistCompleted event.
The application takes that history and saves it to the book of record.
The transaction completes successfully.
At this point, the application knows that the operation was successful, but the caller doesn't. So the usual answer is to be thinking of an asynchronous operation that will do the rest of the work.
Possibility one: the application takes the history that it just saved, and uses that history to schedule a task that rehydrates a read-only copy of the aggregate state and then sends that state to the external service.
Possibility two: you ditch the copy of the history that you have now, and fire off an asynchronous task that has enough information to load its own copy of the history from the book of record.
There are at least three ways that you might do this. First, you could have the command handler schedule the task, as before.
Second, you could have an event handler listening for ChecklistCompleted events in the book of record, and have that handler schedule the task.
Third, you could read the ChecklistCompleted event from the book of record, and publish a representation of that event to a shared bus, and let the handler in the external service call you back for a copy of the state.
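As a rough sketch of the event-handler variants (every name below, such as EventStore, ChecklistCompleted, ChecklistReadModel and ExternalService, is a hypothetical stand-in, not something taken from the question), the reacting side could look something like this in Java (17+):

```java
import java.util.List;

// Hypothetical collaborators, sketched only far enough to compile.
interface EventStore {                             // the "book of record"
    List<Object> loadHistory(String aggregateId);
}

interface ExternalService {                        // the outside system
    void send(ChecklistReadModel state);
}

record ChecklistCompleted(String checklistId) {}   // the domain event

// Read-only representation of the aggregate state, rebuilt from its history.
class ChecklistReadModel {
    static ChecklistReadModel replay(List<Object> history) {
        ChecklistReadModel model = new ChecklistReadModel();
        // ... fold each event of the history into the model here ...
        return model;
    }
}

// Reacts to ChecklistCompleted asynchronously (e.g. via a subscription on the
// book of record or a message bus), rehydrates the state inside the owning
// service, and pushes a representation of it to the external service.
class ChecklistCompletedHandler {

    private final EventStore bookOfRecord;
    private final ExternalService externalService;

    ChecklistCompletedHandler(EventStore bookOfRecord, ExternalService externalService) {
        this.bookOfRecord = bookOfRecord;
        this.externalService = externalService;
    }

    void on(ChecklistCompleted event) {
        List<Object> history = bookOfRecord.loadHistory(event.checklistId());
        ChecklistReadModel state = ChecklistReadModel.replay(history);
        externalService.send(state);
    }
}
```

The point of the sketch is that rehydration stays inside the service that owns the checklist; the external service only ever sees a representation built for it.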
I was under the impression that one bounded context should not reach out to get state from another bounded context but rather keep local copies of the data it needed.
From my experience, the key idea is that the services shouldn't block each other -- or more specifically, a call to service B should not block when service A is unavailable. Responding to events is fundamentally non-blocking; does it really matter that we respond to an asynchronously delivered event by making a blocking call?
What this buys you, however, is independent evolution of the two services - A broadcasts an event, B reacts to the event by calling A and asking for a representation of the aggregate that B understands, A -- being backwards compatible -- delivers the requested representation.
Compare this with requiring a new release of B every time the rehydration logic in A changes.
Udi Dahan raised a challenging idea - the notion that each piece of data belongs to a single technical authority. "Raw business data" should not be replicated between services.
A service is the technical authority for a specific business capability.
Any piece of data or rule must be owned by only one service.
So in Udi's approach, you'd start to investigate why B has any responsibility for data owned by A, and from there determine how to align that responsibility and the data into a single service. (Part of the trick: the physical view of a service can span process boundaries; in other words, a process may be composed from components that belong to more than one service).
Jeppe Cramon's series on microservices is nicely sourced, and touches on many of the points above.
You should never externalise your state. Reporting on that state is a function of the read side: it produces the reports, and that is the data you would use to call the service. The structure of your state is plastic, and you shouldn't have an external service that relies upon that structure; otherwise you'll have to update both in lockstep, which is a bad thing.
There is a blog post that puts forward a strong argument that the process manager is the correct place to put this type of feature (calling an external service), because that's the appropriate place for orchestrating events.

In spring batch, what is a scope of an ItemReader without scope="..."?

If I have a web application, with an application context that loads everything for my webapp plus all my job configuration files, and I have in a job a simple ItemReader without scope="step", the reader is a singleton, right? So if I launch my job twice from a controller via a SimpleJobLauncher, I will use the same bean, right? Unless I put scope="step", in order to have one bean per job execution?
On the other hand, if I launch the job from a CommandLineJobRunner, I will have two distinct application contexts, so two different beans, right?
Are my assertions valid?
Thanks
Yes, that is correct. By default, every bean instance in a Spring context is a singleton.
However, most readers and writers have state. For instance, a FlatFileItemReader can only run once; after that it points to the end of the file and its close() method has been called. Therefore, if you simply start the job again, it will not work, since the FlatFileItemReader is already closed.
For such cases, you will need to define them with scope="step".
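For illustration, here is a minimal sketch of a step-scoped reader in Java configuration (assuming a recent Spring Batch version; the bean name, file name and item type are placeholders). With step scope, a fresh reader instance is created for every step execution instead of one shared singleton:

```java
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.mapping.PassThroughLineMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;

@Configuration
public class ReaderConfiguration {

    // Step-scoped reader: a new instance is created for every step execution,
    // so relaunching the job does not reuse an already-closed reader.
    @Bean
    @StepScope
    public FlatFileItemReader<String> itemReader() {
        FlatFileItemReader<String> reader = new FlatFileItemReader<>();
        reader.setName("itemReader");                             // key used to save restart state
        reader.setResource(new FileSystemResource("input.txt"));  // placeholder input file
        reader.setLineMapper(new PassThroughLineMapper());
        return reader;
    }
}
```

In XML configuration, the equivalent is simply scope="step" on the bean definition.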

Rules in jBPM 6

I have created a process in jBPM 6. There is a class Person, with attributes name and age. In the process form, the name and age of the person are entered. The first node in the process is a human task to view the details. The second node is an XOR gateway with Drools expressions on its arcs, like Person(age > 20) and Person(age < 20).
Now when I execute the process instance, the first human task works fine, but when it reaches the gateway, I can see this error:
"XOR split could not find at least one valid outgoing connection for split Gateway".
Any idea what's wrong?
Gateways containing Drools expressions only work with facts, not with process variables. If you want to make use of a Drools expression in your gateways, you will need to insert the process variable (or the whole process instance) as a fact. You can do so using, for example, a script node or an on-exit action on your human task.
From the documentation:
Rule constraints do not have direct access to variables defined inside the process. It is however possible to refer to the current process instance inside a rule constraint, by adding the process instance to the Working Memory and matching for the process instance in your rule constraint. [...] Note that you are however responsible yourself to insert the process instance into the session and, possibly, to update it, for example, using Java code or an on-entry or on-exit or explicit action in your process.
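For example (a sketch only, assuming the process variable holding the Person object is named "person"), the body of a script node or on-exit action using the Java dialect could look like this:

```java
// Script node / on-exit action body (Java dialect). "person" is the assumed
// name of the process variable; adjust it to match your process definition.
Person person = (Person) kcontext.getVariable("person");

// Insert the variable as a fact so the gateway constraints, e.g.
// Person(age > 20), can match it in working memory.
kcontext.getKieRuntime().insert(person);

// Alternatively, insert the whole process instance and match on
// WorkflowProcessInstance in the rule constraint instead:
// kcontext.getKieRuntime().insert(kcontext.getProcessInstance());
```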
Hope it helps,

quartz-scheduler dependent jobs

I'm working on a project with Quartz and have run into a problem with dependencies between jobs.
We have a setup where A and B aren't dependent on each other, though C is:
A and B can run at the same time, but C can only run when both A and B are complete.
Is there a way to set this kind of scenario up in Quartz, so that C will only trigger when A and B finish?
Not directly, AFAIK, but it should not be too hard to use a TriggerListener to implement such functionality (a TriggerListener is invoked both when a trigger fires and when it completes, and you can register listeners for individual triggers or for trigger groups).
EDIT: there is even a specific FAQ Topic about this problem:
There currently is no "direct" or "free" way to chain triggers with Quartz. However there are several ways you can accomplish it without much effort. Below is an outline of a couple approaches:

One way is to use a listener (i.e. a TriggerListener, JobListener or SchedulerListener) that can notice the completion of a job/trigger and then immediately schedule a new trigger to fire. This approach can get a bit involved, since you'll have to inform the listener which job follows which - and you may need to worry about persistence of this information. See the listener org.quartz.listeners.JobChainingJobListener which ships with Quartz - as it already has some of this functionality.

Another way is to build a Job that contains within its JobDataMap the name of the next job to fire, and as the job completes (the last step in its execute() method) have the job schedule the next job. Several people are doing this and have had good luck. Most have made a base (abstract) class that is a Job that knows how to get the job name and group out of the JobDataMap using pre-defined keys (constants) and contains code to schedule the identified job. This abstract Job's implementation of execute() delegates to an abstract template method such as "doWork()" (where the extending Job class's real work goes) and then it contains the code for scheduling the follow-up job. Then they simply make extensions of this class that included the work the job should do. The usage of 'durable' jobs, or the overloaded addJob(JobDetail, boolean, boolean) method (added in Quartz 2.2) helps the application define all the jobs at once with their proper data, without yet creating triggers to fire them (other than one trigger to fire the first job in the chain).

In the future, Quartz will provide a much cleaner way to do this, but until then, you'll have to use one of the above approaches, or think of yet another that works better for you.
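To make the listener approach concrete for the "C waits for A and B" case, here is a minimal sketch (assuming Quartz 2.x; the listener name and job keys are placeholders) of a JobListener that triggers a target job only once all prerequisite jobs have completed successfully. It is a simplified cousin of the bundled org.quartz.listeners.JobChainingJobListener, which chains one job after another rather than joining on several:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.JobKey;
import org.quartz.SchedulerException;
import org.quartz.listeners.JobListenerSupport;

// Triggers a target job once every prerequisite job has completed successfully.
// Note: this simple version keeps its state in memory only, so it does not
// survive a scheduler restart and does not reset itself for repeated runs.
public class JoinJobListener extends JobListenerSupport {

    private final Set<JobKey> pending = Collections.synchronizedSet(new HashSet<>());
    private final JobKey target;

    public JoinJobListener(Set<JobKey> prerequisites, JobKey target) {
        this.pending.addAll(prerequisites);
        this.target = target;
    }

    @Override
    public String getName() {
        return "JoinJobListener";
    }

    @Override
    public void jobWasExecuted(JobExecutionContext context, JobExecutionException jobException) {
        if (jobException != null) {
            return; // only count clean completions
        }
        pending.remove(context.getJobDetail().getKey());
        if (pending.isEmpty()) {
            try {
                context.getScheduler().triggerJob(target); // fire job C now
            } catch (SchedulerException e) {
                getLog().error("Could not trigger follow-up job " + target, e);
            }
        }
    }
}
```

Register it for A and B with something like scheduler.getListenerManager().addJobListener(listener, KeyMatcher.keyEquals(jobKeyA), KeyMatcher.keyEquals(jobKeyB)), and store C as a durable job with no trigger of its own so that triggerJob can fire it on demand.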

How do I listen for, load and run user-defined workflows at runtime that have been persisted using SqlWorkflowInstanceStore?

The result of SqlWorkflowInstanceStore.WaitForEvents does not tell me what type of workflow is runnable. The constructor of WorkflowApplication takes a workflow definition, and at a minimum, I need to be able to store a workflow ID in the store and query it, so that I can determine which workflow definition to load for the WorkflowApplication.
I also don't want to create a SqlWorkflowInstanceStore for each custom workflow type, since there may be thousands of different workflows.
I thought about trying to use WorkflowServiceHost, but not every workflow has a Receive activity and I don't think it is feasible to have thousands of WorkflowServiceHosts running, each supporting a different workflow type.
Ideally, I just want to query the database for a runnable workflow, determine its workflow definition ID, load the appropriate XAML from a workflow definition table, instantiate WorkflowApplication with the workflow definition, and call LoadRunnableInstance().
I would like to have a way to correlate which workflow is related to a given HasRunnableWorkflowEvent raised by the SqlWorkflowInstanceStore (along with the custom workflow definition ID), or have an alternate way of supporting potentially thousands of different custom workflow types created at runtime. I must also load balance the execution of workflows across multiple application servers.
There's a free product from Microsoft that does pretty much everything you say there, and then some. Oh, and it's excellent too.
Windows Server AppFabric. No, not Azure.
http://www.microsoft.com/windowsserver2008/en/us/app-main.aspx
-Oisin