How to get the preceding task name/id from the task triggered on Task Failure - QlikSense

I am new to QlikSense and I have a scenario like this: I have two tasks, A and B. Task B is triggered when task A fails. Now, my intention is to get the task name or id of task A from within task B. I have tried a lot in different community forums but no luck yet.
There are multiple tasks at level A, and task B can be triggered by any of them. As a result, I need some trick to find the name of the task that actually triggered task B.
Thanks in advance.

Related

How to create a Teamcity build trigger that will run job B once per week, after job A finished, where A runs daily

I have some Teamcity jobs created.
One of those jobs, let's call it job A, has a schedule trigger, running daily at 7:00 am.
Now I have another one, job B, that I want to run once per week, but only after job A ran.
Given that job A takes about 30 seconds to run, I know I can create a schedule trigger for job B, that will run on every Monday, at 07:10 am.
I also know I can create a Finish Build Trigger, making sure that job B runs after job A ran, but then it would run every day (because job A needs to run every day).
I'm trying to find a way to combine these, and come up with some sort of trigger that does something like this:
runs job B once per week (say Monday morning), after job A ran.
Could someone nudge me in the right direction? Or explain to me if/why what I'd like to do is a no-no. Thanks
It looks like the feature called Snapshot Dependency fits well into your scenario.
In short, you can link the two jobs with a snapshot dependency: in your case, job B will "snapshot-depend" on job A. In my experience, it works best if both jobs use the same VCS root, that is, work with the same repository.
Job A is configured to run daily and job B is configured to run weekly (via regular scheduled triggers). When job A is triggered, it doesn't affect job B at all. On the other hand, when job B is triggered, it tries to find a suitable build of A from that time. If the two jobs work with the same repo, and the "Enforce revision synchronization" flag is ON, it will try to find a build of A from that same source code revision.
If there's a suitable build of A, it won't trigger a new one and will just build B. If there's no suitable build of A, it will first trigger A, and then trigger the build of B.

What exactly is a Cadence decision task?

Activity tasks are pretty easy to understand, since they execute an activity... but what is a decision task? Does the worker run through the workflow from the beginning (using records of completed activities) until it hits the next "meaningful" thing it needs to do, while making a "decision" on what needs to be done next?
My Opinions
Ideally users don't need to understand it!
However, the decision/workflow task is a leaked technical detail of the Cadence/Temporal API.
Unfortunately, you won't be able to use Cadence/Temporal well if you don't fully understand it.
Fortunately, using iWF will keep you away from this leakage. iWF provides a nice abstraction on top of Cadence/Temporal but keeps the same power.
TL;DR
Decision is short for workflow decision.
A decision is a movement from one state to another in a workflow state machine. Essentially, your workflow code defines a state machine. This state machine must be a deterministic state machine for replay, so workflow code must be deterministic.
A decision task is a task for a worker to execute workflow code and generate decisions.
NOTE: in Temporal, a decision is called a "command", and the decision task is called a "workflow task", which generates the commands.
Example
Let's say we have this workflow code:
public String sampleWorkflowMethod(...) {
    // Schedule activityA and block until its result comes back
    String result = activityStubs.activityA(...);
    if (result.startsWith("x")) {
        // Durable timer, scheduled via the Cadence service
        Workflow.sleep(...);
    } else {
        result = activityStubs.activityB(...);
    }
    return result;
}
From Cadence/Temporal SDK's point of view, the code is a state machine.
Assume an execution in which the result of activityA is "xyz", so the execution takes the sleep branch.
The workflow execution then moves through the states of that machine accordingly.
Workflow code defines the state machine, and it's static.
Workflow execution decides how to move from one state to another at run time, based on the input, activity results, and code logic.
A decision is an abstraction internal to Cadence. During workflow execution, when the execution moves from one state to another, the decision is the result of that movement.
The abstraction basically defines what needs to be done when the execution moves from one state to another: schedule an activity, a timer, a child workflow, etc.
Decisions need to be deterministic: given the same input/results, the workflow code must make the same decision, e.g., whether to schedule activityA or activityB must come out the same every time.
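To make this concrete, here is a minimal sketch of the classic pitfall, assuming the Cadence/Temporal Java SDK (Workflow.currentTimeMillis() is the SDK's history-backed clock; activityStubs is the stub from the example above, and result/input are illustrative locals):

// NON-deterministic: the wall clock differs on each replay, so the same
// history could schedule activityA on one replay and activityB on another.
if (System.currentTimeMillis() % 2 == 0) {
    result = activityStubs.activityA(input);
} else {
    result = activityStubs.activityB(input);
}

// Deterministic: Workflow.currentTimeMillis() is recorded in workflow
// history, so every replay sees the same value and makes the same decision.
if (Workflow.currentTimeMillis() % 2 == 0) {
    result = activityStubs.activityA(input);
} else {
    result = activityStubs.activityB(input);
}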
Timeline in the example
What happens during the above workflow execution:
The Cadence service schedules the very first decision task and dispatches it to a workflow worker.
The worker executes the first decision task and returns the decision result, schedule activityA, to the Cadence service. The workflow then waits.
As a result of scheduling activityA, the Cadence service generates an activity task and dispatches it to an activity worker.
The activity worker executes the activity and returns the result "xyz" to the Cadence service.
On receiving the activity result, the Cadence service schedules the second decision task and dispatches it to a workflow worker.
The workflow worker executes the second decision task and responds with the decision result, schedule a timer, to the Cadence service.
On receiving the decision task response, the Cadence service starts the timer.
When the timer fires, the Cadence service schedules the third decision task and dispatches it to a workflow worker again.
The workflow worker executes the third decision task and responds with the result: complete the workflow execution successfully with result "xyz".
Some more facts about decisions
A workflow decision orchestrates the other entities: activities, child workflows, timers, etc.
A decision (workflow) task is how the worker communicates with the Cadence service, telling it what to do next: for example, start/cancel some activities, or complete/fail/continueAsNew a workflow.
There is always at most one outstanding (running/pending) decision task per workflow execution; a new one cannot start while another has started but not yet finished.
The nature of the decision task leads to some non-determinism issues when writing Cadence workflows.
On each decision task, the Cadence client SDK can start from the very beginning and "replay" the code, for example, executing activityA. However, replay mode won't generate the decision to schedule activityA again, because the client knows from history that activityA has already been scheduled.
However, a worker doesn't always have to run the code from the very beginning. The Cadence SDK is smart enough to keep workflow state in memory and wake up later to continue from that state. This is called the "workflow sticky cache", because a workflow stays sticky on a worker host for a period.
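The size of that sticky cache is tunable on the worker side. A minimal sketch, assuming the Temporal Java SDK (the Cadence Java client exposes a similar factory option; client is an already-built WorkflowClient):

import io.temporal.worker.WorkerFactory;
import io.temporal.worker.WorkerFactoryOptions;

// Cap how many workflow executions this worker process keeps cached.
// Workflows evicted from the cache are rebuilt later by history replay.
WorkerFactoryOptions options = WorkerFactoryOptions.newBuilder()
        .setWorkflowCacheSize(600)
        .build();
WorkerFactory factory = WorkerFactory.newInstance(client, options);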
History events of the example:
1. WorkflowStarted
2. DecisionTaskScheduled
3. DecisionTaskStarted
4. DecisionTaskCompleted
5. ActivityTaskScheduled <this schedules activityA>
6. ActivityTaskStarted
7. ActivityTaskCompleted <this records the results of activityA>
8. DecisionTaskScheduled
9. DecisionTaskStarted
10. DecisionTaskCompleted
11. TimerStarted <this schedules the timer>
12. TimerFired
13. DecisionTaskScheduled
14. DecisionTaskStarted
15. DecisionTaskCompleted
16. WorkflowCompleted
TL;DR: when a new external event is received, a workflow task is responsible for determining which commands to execute next.
Temporal/Cadence workflows are executed by an external worker, so the only way to learn which next steps a workflow has to take is to ask it every time new information is available. The only way to dispatch such a request to a worker is to put a workflow task into a task queue. The workflow worker picks it up, gets the workflow out of its cache, and applies the new events to it. After the new events are applied, the workflow executes, producing a new set of commands. Once the workflow code is blocked and cannot make any further progress, the workflow task is reported as completed back to the service. The list of commands to execute is included in the completion request.
Does the worker run through the workflow from the beginning (using records of completed activities) until it hits the next "meaningful" thing it needs to do, while making a "decision" on what needs to be done next?
This depends on whether the worker has the workflow object in its LRU cache. If the workflow is in the cache, no recovery is needed and only new events are included in the workflow task. If the object is not cached, the whole event history is shipped and the worker has to execute the workflow code from the beginning to reach its current state. All commands produced while replaying past events are duplicates of previously produced commands and are ignored.
The above means that during the lifetime of a workflow, multiple workflow tasks have to be executed. For example, for a workflow that calls two activities in sequence:
a();
b();
The tasks will be executed for every state transition:
-> workflow task at the beginning: command is ScheduleActivity "a"
a();
-> workflow task when "a" completes: command is ScheduleActivity "b"
b();
-> workflow task when "b" completes: command is CompleteWorkflowExecution
In the answer, I used terminology adopted by temporal.io fork of Cadence. Here is how the Cadence concepts map to the Temporal ones:
decision task -> workflow task
decision -> command, but it can also mean workflow task in some contexts
task list -> task queue

Combine multiple queued Azure DevOps build pipeline jobs into one run

I have a custom Agent Pool with multiple Agents, each with the same capabilities. This Agent Pool is used to run many YAML build pipeline jobs, call them A1, A2, A3, etc. Each of those A* jobs triggers a different YAML build pipeline job called B. In this scheme, multiple simultaneous completions of A* jobs will trigger multiple simultaneous B jobs. However, job B is set up to self-interlock, so that only one instance can run at a time. The nice thing is that when job B runs, it consumes all of the existing A* outputs (for safety reasons, A* and B are also interlocked).
Unfortunately, this means that of the multiple simultaneous B jobs, most will be stuck waiting for the first one to finish after it has processed all of the outputs of the completed A* jobs. Only then can the remaining queued and/or interlock-blocked instances of job B continue, one at a time, each having nothing to consume because all of the A* outputs have already been processed.
Is there a way to make Azure DevOps batch multiple instances of job B together? In other words, if there is already one B job instance running or queued, don't add another one?
Sorry for any inconvenience.
This behavior is by design. AFAIK, there is no way or feature to combine multiple queued runs of a build pipeline into one.
Besides, I personally think your request is reasonable. You could add your request for this feature on our UserVoice site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21 ), which is our main forum for product suggestions. Thank you for helping us build a better Azure DevOps.
Hope this helps.

Setting up a Job Schedule

I currently have a setup that creates a job and then collects some metrics about the tasks in the job. I want to do something similar, but with a job schedule instead. In particular, I want to set up a job schedule that wakes up at a recurrence interval I specify and runs the same code I was running when creating a job. What's the best way to go about doing that?
It seems that there is a CloudJobSchedule that I could use to set up my job schedule, but this only lets me create, say, a job manager task and specify a few properties. How can I run external code on the jobs created by the job schedule?
It would also help to clarify how the CloudJobSchedule works. Specifically, after I commit my job schedule, what happens programmatically? Does execution just continue sequentially through the rest of my code? In that case, does it make sense to get a reference to the last job created by the job schedule and run code against the returned job?
You'll want to create a CloudJobSchedule. You can specify the recurrence in the Schedule.
If you only need to run a single task per recurrence, your job manager task can simply be the task you need to run. If you need to run multiple tasks per job recurrence, your job manager needs to have logic to submit tasks to Batch and monitor for completion (if necessary).
When you submit a job schedule to Batch, your client side code will continue running. The behavior is no different than if you were submitting a regular job. You can retrieve the last job run via JobScheduleExecutionInformation and the RecentJob property.
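As a rough sketch, assuming the Azure Batch Java SDK (the question uses the .NET names; the method and model names below are my best mapping and worth verifying against your SDK version, and the account values, IDs, pool, and command line are placeholders):

import com.microsoft.azure.batch.BatchClient;
import com.microsoft.azure.batch.auth.BatchSharedKeyCredentials;
import com.microsoft.azure.batch.protocol.models.*;
import org.joda.time.Period;

BatchClient client = BatchClient.open(
        new BatchSharedKeyCredentials(batchUri, accountName, accountKey));

// Each recurrence creates a new job whose job manager task runs your code.
Schedule schedule = new Schedule().withRecurrenceInterval(Period.hours(1));
JobSpecification jobSpec = new JobSpecification()
        .withPoolInfo(new PoolInformation().withPoolId("my-pool"))
        .withJobManagerTask(new JobManagerTask()
                .withId("job-manager")
                .withCommandLine("/bin/sh -c 'collect-metrics'"));
client.jobScheduleOperations().createJobSchedule("my-schedule", schedule, jobSpec);

// Client code continues immediately after this call. Later, look up the
// most recent job created by the schedule (RecentJob in .NET terms):
CloudJobSchedule js = client.jobScheduleOperations().getJobSchedule("my-schedule");
String recentJobId = js.executionInfo().recentJob().id();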

How to find the data flow without the timing information of task instances

I have some tasks, which are further divided into runnables. Runnables execute as task instances. Runnables have dependencies within their own task and also on other tasks' runnables. I have the deadlines and periods of the tasks and the execution order of tasks and runnables, i.e. I can extract the data flow. The only point where I am stuck is: how can I tell whether the task instances execute within their period, i.e. obey their deadlines, and if an instance does not finish within its deadline, whether it will execute in the next cycle (period)?
Any ideas? Suggestions?
P.S. I don't have timing information for the execution of the runnables.