I have created a state machine workflow in WF 4.0, and it has the following states:
State 1: RunExe
State 2: LoadData
State 3: VerifyData
State 4: ExportData
State 5: Complete
Each of the above states has a Receive and Send activity, with the code activities to execute in between. Suppose verifying the data fails (state 3); I want to re-run state 2 and then do state 3 again.
Can anyone let me know how this can be done?
Add a state transition from state 3 back to state 2, and have that transition fire on a verification failure.
I have a pipeline with two concurrent lists of activities running at the same time. The first list of activities has a flag at the end which makes the Until loop in the second list complete when the flag is true. The problem is that the flag is set at the end of the pipeline, once all the activities in list one have completed. If any activity prior to the set-flag activity fails, the flag is never set to TRUE and the Until loop runs forever, which is not ideal.
Is there a way that I can manually cancel the activity when any one of the activities fails?
Secondly, the Until loop has an inner Wait activity, so once it enters the loop it waits 50 minutes before it checks the flag condition again. The wait is important, so I can't remove it; however, I would like the Until loop to end as soon as the flag is set to true, even while the wait is running. Basically, I'm trying to end the Until loop early.
I did try the steps in the MS docs for cancelling a pipeline run: https://learn.microsoft.com/en-us/rest/api/datafactory/pipelineruns/cancel
But this does not work: even when the run ID is correct, it says the run ID is incorrect.
Could someone please advise how to achieve this?
I am running a pipeline that loops through all the tables in INFORMATION_SCHEMA.TABLES and copies each one to Azure Data Lake Store. My question is: how do I run this pipeline for only the failed tables if any of the tables fails to copy?
Best approach I’ve found is to code your process to:
0. Yes, root-cause the failure and identify whether it is something wrong with the pipeline or a "feature" of your dependency that you have to code around.
1. Be idempotent. If your process ensures a clean state as the very first step, similar to Command Design pattern’s undo (but more naive), then your process can re-execute.
* with #1, you can safely use “retry” in your pipeline activities, along with sufficient time between retries.
* this is an ADFv1 or v2 compatible approach
2. If ADFv2, then you have more options and can have more complex logic to handle errors:
* for the activity that is failing, wrap this in an until-success loop, and be sure to include a bound on execution.
* you can add more activities in the loop to handle failure and log, notify, or resolve known failure conditions due to externalities out of your control.
3. You can also communicate asynchronously with future process executions by saving successes to a central store. A later execution can then check whether it already succeeded and stop processing before the activity (see the sketch after this list).
* this is powerful for more generalized pipelines, since you can choose where to begin
4. The last resort I know of (and I would love to learn new ways to handle this) is manual re-execution of the failed activities.
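As an illustration of point 3, here is a minimal sketch of the "check a central success store before re-running" pattern. It is not ADF-specific; the SuccessStore interface, the IdempotentCopyJob class, and the method names are invented for illustration, and in practice the store could be a database table, a blob, or a key-value store:

import java.util.List;
import java.util.Set;

public class IdempotentCopyJob {

    // Hypothetical central store recording which tables were already copied successfully.
    interface SuccessStore {
        Set<String> completedTables(String runKey);
        void markCompleted(String runKey, String table);
    }

    private final SuccessStore store;

    public IdempotentCopyJob(SuccessStore store) {
        this.store = store;
    }

    public void run(String runKey, List<String> tables) {
        Set<String> done = store.completedTables(runKey);
        for (String table : tables) {
            if (done.contains(table)) {
                continue; // already copied in an earlier execution, so skip it
            }
            copyTable(table);                   // the step that may fail
            store.markCompleted(runKey, table); // record success only after the copy
        }
    }

    private void copyTable(String table) {
        // actual copy logic (e.g. source table -> data lake) goes here
    }
}

With something like this in place, re-running the whole pipeline after a partial failure only re-processes the tables that did not complete, which also covers the "run for the failed tables only" question above.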
Hope this helps,
J
Watches are one-time triggers; if you get a watch event and you want to get notified of future changes, you must set another watch.
Let's assume there is a znode whose data is updated every 5-100 seconds. I want to maintain a local copy of its data. So my algorithm is:
1. call get(znode, set_watch=true) to set a watch
2. on znode's data changed:
   2.1. a step with a potential programmer's mistake
   2.2. local_copy = get(znode, set_watch=true) to get the local copy and set another watch
So a single exception at step 2.1 causes step 2.2 to be skipped, which means another watch is not set and all future updates are lost.
What is the general way to write a robust data-changed listener? My current dirty workaround is to set additional watches on a timer.
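A minimal sketch of the re-register-first pattern, using the plain Apache ZooKeeper Java API (the ZnodeMirror and handleChange names are made up for illustration): re-read the data and re-arm the watch before running the failure-prone processing step, so an exception there cannot skip the re-registration.

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZnodeMirror implements Watcher {
    private final ZooKeeper zk;
    private final String path;
    private volatile byte[] localCopy;

    public ZnodeMirror(ZooKeeper zk, String path) throws KeeperException, InterruptedException {
        this.zk = zk;
        this.path = path;
        refresh(); // initial read also sets the first watch
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Watcher.Event.EventType.NodeDataChanged) {
            try {
                // Re-read the data and re-arm the watch BEFORE doing any
                // failure-prone processing, so a bug in handleChange()
                // cannot leave us without a watch.
                refresh();
                handleChange(localCopy); // the step with a potential programmer's mistake
            } catch (Exception e) {
                // If refresh() itself failed (e.g. connection loss), log it and
                // retry refresh() from a timer or reconnect handler.
                e.printStackTrace();
            }
        }
    }

    private void refresh() throws KeeperException, InterruptedException {
        // Passing "this" as the watcher re-registers the one-time watch.
        localCopy = zk.getData(path, this, new Stat());
    }

    private void handleChange(byte[] data) {
        // application-specific processing of the new data
    }
}

The timer-based safety net is still a reasonable addition on top of this, since refresh() itself can fail (for example on connection loss) and then also needs to be retried.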
Scenario A:
Step A - PENDING
Step B - PENDING
Scenario B:
Step C - Implemented
Step D - Implemented
When running the story, steps C and D are set as NOT PERFORMED. How do I get those to run even with scenario A failing due to pending steps?
I've tried setting a PendingStepStrategy to PassingUponPendingStep (and FailingUponPendingStep) but it doesn't make a difference.
JBehave can be configured to keep track of state in between scenarios. I believe the reason for this is to support cases where you want scenarios that relate to one another.
If you check which configuration you're using, you should be able to see whether a certain parameter is set on the StoryControls.
For example
Configuration configuration = new MostUsefulConfiguration()
.useStoryControls(new StoryControls().doResetStateBeforeScenario(false))
...
If you have the above setting, the other scenarios will not be performed, as the failure state is retained.
You can use JBehave's MostUsefulConfiguration class without any extra configuration, as doResetStateBeforeScenario is set to true by default.
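For example, if you want to be explicit about it, a minimal sketch of a configuration that resets state before each scenario (equivalent to the MostUsefulConfiguration default) would be:
Configuration configuration = new MostUsefulConfiguration()
    .useStoryControls(new StoryControls().doResetStateBeforeScenario(true));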
Those steps should run anyway. I think you might have an error in the line where you declare the scenario, and JBehave thinks those four steps belong to the same scenario.
The scenarios are separated by the token Scenario:, for example
Scenario: Use a pattern variant
When the item cost is 10.0
When the price is 10.0
When the cost is 10.0
Scenario: Use a aliases variant
Then the item price is 10.0
Then the item price becomes 10.0
Then the item price equals to 10.0
Even if any of the steps in the first scenario fails, the second scenario will run.
WF Tasks - T1 > T2 > T3
Steps
1. The workflow instance is started and bookmarked at the first task, T1.
2. When loading the workflow instance, the instance starts successfully and moves to the next task, but the bookmark information in [System.Activities.DurableInstancing].[InstancesTable] is not updated; it still shows only the old bookmark information.
I traced the workflow: it reaches the bookmark stage, the code activity for the bookmark sends the bookmark information for the next task T2, and context.CreateBookmark(bookmarkName, new BookmarkCallback(OnReadComplete)); is called, but the instance information is not updated with the new bookmark.
The workflow persistence database is not updated until the workflow persists again. That is by design, so you can restart from a known point if your application crashes. You can force persistence by adding Persist activities to your workflow.
Adding to "The problem solver"'s answer, you can refer to this page for persistence points:
http://msdn.microsoft.com/en-us/library/dd489420.aspx