Currently, when our workflow application encounters an unhandled exception, it reloads the workflow from the most recently persisted state and tries again. Is there any way to configure exactly how this works? If a service is down, for example, the workflow reloads roughly every second and tries to run again, which, when multiple workflows are all doing the same thing, can result in thousands of exceptions per minute.
I think that using the timeToPersist and timeToUnload properties on workflowIdle might have something to do with this. Currently we have this set to:
If I set timeToUnload to 1 minute, does that mean the workflow will only be able to retry once every minute?
TimeToPersist and TimeToUnload won't come into play here; those values determine how long a workflow has to be idle before it is persisted/unloaded.
You can probably use WorkflowApplication.OnUnhandledException to create a catch-all exception handler (assuming you're using this class to create workflows).
http://msdn.microsoft.com/en-us/library/system.activities.workflowapplication.onunhandledexception.aspx
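Not part of the original answer, but here is a minimal C# sketch of what such a catch-all handler might look like, assuming you host the workflow definition yourself with WorkflowApplication (the surrounding hosting code is illustrative):

using System;
using System.Activities;

class WorkflowHost
{
    static void Run(Activity workflowDefinition)
    {
        var app = new WorkflowApplication(workflowDefinition);

        app.OnUnhandledException = e =>
        {
            // Log the failure; e.UnhandledException is the original exception,
            // e.ExceptionSource is the activity that threw it.
            Console.WriteLine("Unhandled exception: {0}", e.UnhandledException.Message);

            // Decide what happens to this instance:
            // UnhandledExceptionAction.Abort     - abort the in-memory instance,
            // UnhandledExceptionAction.Cancel    - run cancellation handlers and complete as Canceled,
            // UnhandledExceptionAction.Terminate - complete the instance as Faulted.
            return UnhandledExceptionAction.Abort;
        };

        app.Run();
    }
}

From a handler like this you can put the retry/back-off policy in your own hands, for example by reloading the persisted instance on a timer rather than letting the host go straight back into the reload-and-retry loop described in the question.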
I'm working with a PostgreSQL 9.3 database on an Ubuntu 14 server.
I'm trying to write a trigger function (AFTER ... FOR EACH ROW) that launches an external process, and that process needs to access the row that fired the trigger.
My problem:
Even though I can run queries on the table inside the trigger and see the new row, the external process does not see it (while the trigger function is still running).
Is there a way to manage that?
I thought about starting some kind of asynchronous function call to give the trigger some time to terminate first, but that's of course really ugly.
I also read about NOTIFY/LISTEN, but that would require some refactoring of my existing code plus an additional listener process, which is exactly what I was trying to avoid by using a trigger. (I'm also afraid of new problems that might come up along that road.)
Any more thoughts?
Robin
My company has a few joblets that we put into new jobs to do things like initializing variables, getting system information from the database, and sending out error/warning emails. The issue we are running into is that if we go ahead and start creating the components of a job and then realize that we forgot to include these 3 joblets, we basically have to re-create the job to ensure that the joblets are added first so that they run first.
Is there any way to force these joblets to run first and possibly also in a certain order before moving on to the contents of the job being created? Please let me know if there is any information you may need that I'm missing as I have only been using Talend for a few days. The rest of the team has not been using it too much longer than I have, so they do not have the answer I'm looking for either. Thanks in advance!
In joblets you can use the Trigger_Input and Trigger_Output components as connection points for On Subjob OK triggers. That lets you connect joblets and other components in a job with triggers, thus enforcing execution order.
But you cannot get an On Subjob OK trigger from a tPreJob. I am thinking of triggering from a tPreJob to a tWarn (On Component OK) and then from the tWarn to the joblet (On Subjob OK).
Just for clarification: where does the execution of a command go when that execution is not simply a state update (as in most examples found online)?
For instance, in my case,
The command is FetchLastHistoryChangeSet, which consists of fetching the latest history changeset from an external service based on where we left off last time, in other words the time of the newest change in the previously fetched history changeset.
The event would be HistoryChangeSetFetched(changeSet, time). Following what was said above, the time should be that of the newest change in the newly fetched history changeset (as per the command currently being handled).
Now, in all the examples I see, the flow is always: (i) validate the command, then (ii) persist the event, and finally (iii) handle the event.
It is in handling the event that I have seen custom code added in addition to the updateState logic, with the custom code usually added after the updateState call. But that custom code is most of the time about sending a message back to the sender or broadcasting it on the event bus.
In my example, it is clear that I need to do quite a few operations before I can actually call persist(HistoryChangeSetFetched(changeSet, time)): I need the new changeset, and the time of its newest change.
The only way I can see to do this is to perform the fetch while validating the command.
That is:
case FetchLastHistoryChangeSet =>
  // ValidateCommand returns Some((changeSet, time)) when a new changeset exists, None otherwise.
  ValidateCommand(FetchLastHistoryChangeSet).foreach { case (changeSet, time) =>
    persist(HistoryChangeSetFetched(changeSet, time)) { historyChangeSetFetched => updateState(historyChangeSetFetched) }
  }
Here ValidateCommand(FetchLastHistoryChangeSet) would read the last changeset time (the newest change of that changeset), fetch a new changeset based on it, and, if one exists, get the time of its newest change and return the tuple.
My question is: is that how it is supposed to work? Can validating a command be something as complex as that, i.e. actually executing the command?
As it says in the documentation: "validation can mean anything, from simple inspection of a command message's fields up to a conversation with several external services"
So I think what you're trying to do is exactly right. Any interaction with an external service must be done at the command validation stage.
I want to run a plug-in every 30 minutes, to poll an external system for changes. I am in CRM Online, so I don't have ready access to a scheduling engine.
To run the plug-in, I have a 'trigger' entity with a time zone independent date field. Updating that field fires the plug-in, and it also triggers a workflow, which in pseudocode has this logic:
If (Trigger_WaitUntil >= [Process-Execution Time])
{
    Timeout until Trigger_WaitUntil
    {
        Set Trigger_WaitUntil to [Process-Execution Time] + 30 minutes
        Stop Workflow with status of: Succeeded
    }
}
If (Trigger_WaitUntil < [Process-Execution Time])
{
    Send email  // Tell an admin that the recurring task has self-terminated
    Stop Workflow with status of: Canceled
}
So the behaviour I expect is that every 30 minutes the 'WaitUntil' field gets updated (and the plug-in and workflow get triggered again), unless the WaitUntil date is before the execution time, in which case the workflow stops.
However, 4 hours or so later (probably 8 executions, although I haven't verified that yet) I get an infinite loop warning "This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow".
My question is: why? Do workflows have a correlation ID like plug-ins, which is being carried through to the child workflow? If so, is there any way I can prevent this while maintaining the current basic mechanism of using a single trigger record to manage the schedule? (I've seen other solutions in which workflows create new records, but then you have to go round tidying up the old trigger records as well.)
Yes, this behavior is well known. The only way to implement recurring workflows in Dynamics CRM without running into infinite-loop detection, using only out-of-the-box (OOB) features, is to use the Bulk Deletion functionality. This article describes how to implement it: http://www.crmsoftwareblog.com/2012/08/using-the-bulk-deletion-process-to-schedule-recurring-workflows/
UPD: If you want to run your code every 30 minutes, you will have to create 48 bulk-delete jobs with corresponding start date/times like 12:00, 12:30, 1:00, ...
The current supported method for CRM is to use the Azure Scheduler.
Excerpt:
create a Web API application to communicate with CRM and our external provider running on a shared (free) Azure web site and also utilize the Azure Scheduler to manage the recurrence pattern.
The free version of the Azure Scheduler limits us to execution no more than once an hour and a maximum of 5 jobs. If you have a lot going on, $20 a month will get you executions every minute and up to 50 jobs - which sounds like a pretty good deal.
So if you wanted every 30 minutes, you could create two jobs: one on the half hour and one on the hour.
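Not from the linked write-up, but a rough C# sketch of the kind of endpoint the excerpt describes, which the Azure Scheduler job would call on its recurrence schedule. It assumes ASP.NET Web API 2 and Microsoft.Xrm.Tooling.Connector for the CRM connection; the connection string name and the entity/field names (new_trigger, new_lastpolledon) are placeholders:

using System;
using System.Configuration;
using System.Web.Http;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Tooling.Connector;

public class PollController : ApiController
{
    // POST api/poll - invoked by the Azure Scheduler job on its schedule.
    [HttpPost]
    public IHttpActionResult Run()
    {
        var crm = new CrmServiceClient(
            ConfigurationManager.ConnectionStrings["Crm"].ConnectionString);

        if (!crm.IsReady)
            return InternalServerError(new Exception(crm.LastCrmError));

        // Poll the external provider here, then write the result back to CRM.
        var record = new Entity("new_trigger");        // placeholder entity
        record["new_lastpolledon"] = DateTime.UtcNow;  // placeholder field
        crm.Create(record);

        return Ok();
    }
}

Whatever the endpoint ends up doing, it should be secured (for example with a shared key checked in the action) before a scheduler is pointed at it.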
Bulk Deletion is an interesting workaround and something we've used before. It creates extra work and maintenance, though, so I try to avoid it if possible.
I would generally recommend building a Windows application and using the Windows Task Scheduler (I know you said you don't have a scheduler available, but this option is often forgotten). This approach works really well and is very easy to troubleshoot, and writing to logs and sending error email alerts makes it easy to keep robust. The server doesn't need to be accessible externally; it only needs to reach CRM. If you had CRM on-prem, you could just use the same server.
Azure Scheduler is a great suggestion; it keeps you in the cloud, which is nice.
SSIS is another option if you already have KingswaySoft or Cozy Roc in place.
You could build a workflow that creates another record and cleans up after itself; however, this is really using the wrong tool for the job. Also, it's very easy for it to fail and then not initiate the next record.
There is a solution called "Scheduled Workflow Runner". You create a FetchXML query that builds the record set to run against, and point it at an on-demand workflow that you want to run on each of those records.
http://alexanderdevelopment.net/post/2013/05/18/scheduling-recurring-dynamics-crm-workflows-with-fetchxml/
I'm currently using WF with multiple hosts. If one of these hosts owns a workflow, but crashes, I'd like another host to be able to terminate the workflow. Is there any way to do this?
What I've tried so far is to first remove ownership by executing a SQL query that sets ownerID and ownedUntil to NULL, unlocked to 1, and nextTimer to the current date. Then I get the workflow instance from the runtime and call Terminate on it. This only seems to work when the host that started the workflow is the one terminating it.
I found a workaround. I call Terminate twice on the workflow instance. I still don't understand why that is needed, but it seems to work.
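For what it's worth, a hedged C# sketch of the workaround described above, assuming WF 3.x hosting with WorkflowRuntime and the SQL persistence service, and that the ownership columns have already been cleared with the SQL update mentioned in the question:

using System;
using System.Workflow.Runtime;

static class OrphanCleanup
{
    // instanceId identifies the workflow whose owning host has crashed.
    public static void TerminateOrphanedInstance(WorkflowRuntime runtime, Guid instanceId)
    {
        WorkflowInstance instance = runtime.GetWorkflow(instanceId);

        // Workaround from the answer above: a single Terminate call did not
        // reliably take effect from a non-owning host, so it is issued twice.
        instance.Terminate("Owning host crashed");
        instance.Terminate("Owning host crashed");
    }
}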