I am writing a CloudFormation template in which I will have a Lambda-backed custom resource that triggers a Step Functions execution.
This Step Function can take more than 15 minutes to execute, so I cannot wait for its completion in the invoking Lambda; the Lambda returns immediately after starting the execution.
However, I want to wait for the Step Function to complete before proceeding with the creation of another resource that depends on the Step Function's output.
How can this be achieved? The Step Function I am referencing already exists and is not part of my current stack.
I know CloudFormation supports WaitCondition and WaitConditionHandle, but that would require updating the Step Function to call the wait condition handle, which is not possible because the Step Function is owned by a different team.
Is there any way this can be resolved?
Tricky thing, but you can try using a CloudWatch Events rule that listens for the following event configuration:
EventPattern:
  source:
    - "aws.states"
  detail-type:
    - "Step Functions Execution Status Change"
Then you can write your own Lambda function that triggers your wait condition. Or, if it is possible to filter exactly for the event you want to wait for, bind this event pattern's target directly to an SNS topic, so you won't need custom code at all, provided you can express your requirements as a simple JSON filter.
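As a rough illustration of the Lambda route, here is a minimal sketch of a handler that signals the wait condition handle once the matching event arrives. The WAIT_HANDLE_URL environment variable, the class name, and the UniqueId value are assumptions, not part of the original answer; the request body is the format CloudFormation expects for wait condition signals, and the URL itself would come from the AWS::CloudFormation::WaitConditionHandle resource in your stack.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

public class SignalWaitConditionHandler implements RequestHandler<Map<String, Object>, String> {

    @Override
    public String handleRequest(Map<String, Object> event, Context context) {
        try {
            // In production you would also inspect event.get("detail") for the expected
            // stateMachineArn and a "SUCCEEDED" status before signaling.

            // JSON body expected by a CloudFormation wait condition handle.
            String body = "{\"Status\":\"SUCCESS\","
                    + "\"Reason\":\"Step Function execution finished\","
                    + "\"UniqueId\":\"sfn-execution\","
                    + "\"Data\":\"done\"}";

            // The handle is a pre-signed S3 URL; do not set a Content-Type header.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(System.getenv("WAIT_HANDLE_URL")))
                    .PUT(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            return "Signaled wait condition, HTTP " + response.statusCode();
        } catch (Exception e) {
            throw new RuntimeException("Failed to signal wait condition", e);
        }
    }
}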
I have created a simple model in AnyLogic (see screenshot). Now I want to add a condition that selects one of the two resource sets in the service block. As an example, the following scenario shall apply: if there are more than 5 parts in the queue, worker 3 and worker 4 should perform the service. If there are <= 5 parts in the queue, the service shall be performed by worker 1 and worker 2. This is only meant to be a simplified example; I am primarily interested in solving this problem using a condition. I have already tried different approaches, but without success. Does anyone have an idea of what the Java code for this condition could look like?
First, you don't need the queue since the Service block already has a queue. For this particular example, in your resource choice conditions you will do the following:
service.queueSize()>5 ? (worker3.containsUnit(unit) || worker4.containsUnit(unit))
:
(worker1.containsUnit(unit) || worker2.containsUnit(unit))
You can replace service.queueSize() with queue.size() if you insist on using a queue. After that you need to be sure to recalculate the conditions when needed; for this particular example, I think you only need to recalculate them in the On exit action of the Service block:
self.recalculateResourceChoiceConditions();
One easy approach is to use Seize and Delay (and Release once done) blocks instead of Service. Before the Seize blocks, you can place your condition in a SelectOutputOut block, roughly as sketched below.
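Since the original screenshot is not reproduced here, a rough sketch of that layout and of the routing condition (all block and pool names below are illustrative, not from the original answer):

// Flowchart (illustrative block names):
//   source -> queue -> selectWorkers (SelectOutputOut)
//       exit 1 -> seizeWorkers34 (seizes worker3, worker4) -> delay -> releaseWorkers34 -> sink
//       exit 2 -> seizeWorkers12 (seizes worker1, worker2) -> delay -> releaseWorkers12 -> sink
//
// Condition on the first exit of selectWorkers (a plain Java boolean expression):
queue.size() > 5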
I am interested in getting the actual run start time for a Tumbling Window trigger. I don't want a Schedule trigger; my scenario demands a Tumbling Window trigger specifically, but some logic also requires knowing exactly at what time a triggered run started. As per the documentation I tried using @pipeline().TriggerTime; basically I passed it as a value to one of the pipeline parameters, but it was not converted into a value -- then I realized the scope of this expression is within the pipeline, so I can't use it in a trigger. @trigger().outputs.windowStartTime can be used in a trigger, but it doesn't serve my purpose -- I am not looking for the window start time, which is fixed no matter when the trigger actually runs. I want the actual run start time of the Tumbling Window trigger. Is there any solution to this?
One solution I found is to create an Append Variable activity and use @pipeline().TriggerTime in its value section. Since this is part of the pipeline, it gets converted into a value there.
Another option is to simply call utcnow() in the Append Variable activity.
I have a Dataflow pipeline reading from an unbounded source. My window size is 10 hours, and I am trying to test my trigger using a TestStream. My trigger will emit an early result if the element count reaches at least 2 for the same key within a window. I have the following trigger to achieve this:
input.apply(Window.into(FixedWindows.of(Duration.standardHours(12)))
        .triggering(AfterWatermark.pastEndOfWindow()
            .withEarlyFirings(AfterPane.elementCountAtLeast(2)))
        .withAllowedLateness(Duration.ZERO)   // required whenever a trigger is set
        .accumulatingFiredPanes())            // an accumulation mode is also required
    .apply(Count.perElement())
We also tried:
Repeatedly.forever(AfterPane.elementCountAtLeast(2)).orFinally(AfterWatermark.pastEndOfWindow())
I expect early firings when asserting the result; however, I don't get all the results in
PAssert.that(pipeline).inWindow(..)
What am I doing wrong? Also, running the same test repeatedly yields different results, meaning different values are returned by the trigger.
Triggering is non-deterministic. It will give you an early firing some time after the trigger condition is satisfied. It will then give you another early firing some time after the trigger condition is satisfied again.
The actual choice to emit after the trigger is determined by the runner. If you are using a batch runner, it may wait until all the data is available. How much input are you expecting for each key/window? Which runner are you using?
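To make the assertion itself deterministic, one option (my suggestion, not something stated in the answer above) is to drive the test with a TestStream and assert only on the final pane, which is emitted once the watermark passes the end of the window. A minimal sketch, with made-up element values and key names:

import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.testing.PAssert;
import org.apache.beam.sdk.testing.TestPipeline;
import org.apache.beam.sdk.testing.TestStream;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.IntervalWindow;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TimestampedValue;
import org.joda.time.Duration;
import org.joda.time.Instant;
import org.junit.Rule;
import org.junit.Test;

public class EarlyFiringTest {

    @Rule
    public final transient TestPipeline pipeline = TestPipeline.create();

    @Test
    public void countsArePresentInTheFinalPane() {
        TestStream<String> events = TestStream.create(StringUtf8Coder.of())
                .addElements(
                        TimestampedValue.of("keyA", new Instant(0L)),
                        TimestampedValue.of("keyA", new Instant(1000L))) // 2 elements -> early firing allowed
                .advanceWatermarkToInfinity();                           // closes the window -> final pane

        PCollection<KV<String, Long>> counts = pipeline
                .apply(events)
                .apply(Window.<String>into(FixedWindows.of(Duration.standardHours(12)))
                        .triggering(AfterWatermark.pastEndOfWindow()
                                .withEarlyFirings(AfterPane.elementCountAtLeast(2)))
                        .withAllowedLateness(Duration.ZERO)
                        .accumulatingFiredPanes())
                .apply(Count.perElement());

        IntervalWindow window =
                new IntervalWindow(new Instant(0L), new Instant(0L).plus(Duration.standardHours(12)));

        // Early panes may or may not show up; the final pane is deterministic.
        PAssert.that(counts).inFinalPane(window).containsInAnyOrder(KV.of("keyA", 2L));

        pipeline.run().waitUntilFinish();
    }
}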
I am trying to create a Quartz scheduler using Java which will be able to call an API and pass in data.
I am totally new to Quartz, but I now understand the Job concept and how to create one, the Trigger concept and how to fire one, and how the Scheduler works.
What I am having difficulty with is how to pass in the information that needs to be sent to the API. I have an example where an API is called and the data is entered into the DB, but the information is hard-coded into the class instead of being passed in via the JobDetail.
I.e., the user passes a message to the system which needs to be sent to the user in 12 hours and not before, so my plan was to create a Job and a Trigger that set the execution time to 12 hours from now. How do I pass the message into the scheduler? Where should this message be stored? Is what I am trying to do possible? Have I misunderstood what Quartz is capable of doing?
Thank you for your time. Any assistance would be greatly appreciated.
Take a look at JobDataMap. If you are creating a new job for each user action, you can store the message in there and it will be available during execution.
JobDataMap holds state information for Job instances.
JobDataMap instances are stored once when the Job is added to the scheduler. They are also re-persisted after every execution of jobs annotated with @PersistJobDataAfterExecution.
JobDataMap instances can also be stored with a Trigger. This can be useful in the case where you have a Job that is stored in the scheduler for regular/repeated use by multiple Triggers, yet with each independent triggering, you want to supply the Job with different data inputs.
The JobExecutionContext passed to a Job at execution time also contains a convenience JobDataMap that is the result of merging the contents of the trigger's JobDataMap (if any) over the Job's JobDataMap (if any).
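For instance, here is a minimal sketch of the first option (one job per user action, message stored in the job-level JobDataMap). The class name, map keys, and the API call are placeholders, not from the original answer:

import org.quartz.DateBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDataMap;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

public class DelayedMessageJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Merged view: trigger-level entries override job-level entries.
        JobDataMap data = context.getMergedJobDataMap();
        String userId = data.getString("userId");
        String message = data.getString("message");
        // Call your API here with userId/message (placeholder).
    }

    /** Schedules one job per user action, firing once, 12 hours from now. */
    public static void schedule(Scheduler scheduler, String userId, String message) throws SchedulerException {
        JobDetail job = JobBuilder.newJob(DelayedMessageJob.class)
                .withIdentity("send-" + userId, "messages")
                .usingJobData("userId", userId)
                .usingJobData("message", message)   // message travels with the job, not hard-coded
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("send-" + userId + "-trigger", "messages")
                .startAt(DateBuilder.futureDate(12, DateBuilder.IntervalUnit.HOUR))
                .build();

        scheduler.scheduleJob(job, trigger);
    }
}

If you use a persistent (JDBC) job store, the JobDataMap contents must be serializable; plain strings like these are fine.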
In case you have a single job but for each user action you are creating a new trigger, you can follow the solution given here.
A third option: for each user action, persist the message and the time to send the email to the database, and have a job that runs periodically and scans the database for eligible records for which the email has to be sent.
I saw the sample dynamic trigger on GitHub and it uses a fixed rate/delay, but is it possible to implement a dynamic trigger with a cron expression? Once a job completes with a custom exit code, we want to change the cron expression so that it no longer polls for that day, or so that it starts polling from a different time onwards.
Unfortunately, org.springframework.scheduling.support.CronTrigger uses final fields, so we can't change its state at runtime. Therefore, any attempt to change the cron-expression value in place is a waste of time.
On the other hand, let's look at this as just a time-producer solution that notifies the scheduler when to start the provided task.
In other words, here is the Trigger contract:
public interface Trigger {
    Date nextExecutionTime(TriggerContext triggerContext);
}
So, all our implementation has to do is return the appropriate Date for each nextExecutionTime invocation.
All you need to do here is write a dynamic Trigger implementation that fits your requirements.
Granted, it might be a bit difficult to reach cron-like behavior, but there is no other choice right now...
Alternatively, you can stop() your adapter after the task, inject a new CronTrigger into it, and start() it again.
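A small sketch of that stop/replace/start idea, assuming the polling endpoint is a SourcePollingChannelAdapter bean (the bean and method names are made up):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.endpoint.SourcePollingChannelAdapter;
import org.springframework.scheduling.support.CronTrigger;
import org.springframework.stereotype.Component;

@Component
public class PollerRescheduler {

    @Autowired
    private SourcePollingChannelAdapter pollingAdapter;

    public void reschedule(String newCronExpression) {
        pollingAdapter.stop();                                          // stop the poller
        pollingAdapter.setTrigger(new CronTrigger(newCronExpression));  // inject a new CronTrigger
        pollingAdapter.start();                                         // start it again
    }
}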
You can write a custom trigger that simply wraps a CronTrigger and you can replace the delegate CronTrigger at will.
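For example, a minimal sketch of such a wrapper (the class name is made up); the delegate can be swapped at any time, and the new cron expression is consulted the next time the poller asks for an execution time:

import java.util.Date;
import java.util.concurrent.atomic.AtomicReference;
import org.springframework.scheduling.Trigger;
import org.springframework.scheduling.TriggerContext;
import org.springframework.scheduling.support.CronTrigger;

public class SwappableCronTrigger implements Trigger {

    private final AtomicReference<CronTrigger> delegate;

    public SwappableCronTrigger(String initialExpression) {
        this.delegate = new AtomicReference<>(new CronTrigger(initialExpression));
    }

    /** Replace the cron expression; applies from the next scheduling decision onwards. */
    public void setCronExpression(String expression) {
        delegate.set(new CronTrigger(expression));
    }

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        return delegate.get().nextExecutionTime(triggerContext);
    }
}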
However, a limitation of the Trigger mechanism is you can't change an existing schedule.
If you are running your job on the poller thread, then you can change the trigger before the poller thread returns (and calls the trigger to find the next execution time).
Spring Integration 4.2 (currently at milestone 2) has conditional pollers which will make things like this a little easier.