Windows Workflow not terminating after Transaction Failure

I am a bit new to Windows Workflow Foundation, so this might be very straightforward, but I am stuck. I have a very simple sequential workflow, and a couple of code activities inside it are wrapped in a TransactionScope activity.
I am running my workflow from a console application with the following code:
Activity workflow = new Process();

var inputArgument = new Dictionary<string, object>();
inputArgument["Argument 1"] = 1234567;
inputArgument["Argument 2"] = 1234567;
inputArgument["Argument 3"] = "GUID";
inputArgument["Argument 4"] = @"\\filepath\";

var syncEvent = new AutoResetEvent(false);

var workflowApp = new WorkflowApplication(workflow, inputArgument);
workflowApp.OnUnhandledException =
    delegate (WorkflowApplicationUnhandledExceptionEventArgs e)
    {
        return UnhandledExceptionAction.Terminate;
    };
workflowApp.Completed +=
    delegate (WorkflowApplicationCompletedEventArgs e)
    {
        syncEvent.Set();
    };
workflowApp.Run();
syncEvent.WaitOne();
If I don't add the TransactionScope activity, my workflow runs fine: in case of an exception the workflow instance terminates and my console application closes as well.
However, when I add the TransactionScope activity and any activity inside it fails, my workflow instance keeps running, and so does my console application. Can anyone guide me on how to terminate the instance?
I am not handling any exceptions within my workflow, and I want to keep it that way so that I can log the exception details.

If you go to the properties of the TransactionScope in the workflow, there is a property called AbortInstanceOnTransactionFailure, which is set to true by default. Set it to false; the workflow should then behave as you're expecting.
When this property is enabled, a transaction failure causes the workflow instance to abort but not terminate, so neither the OnUnhandledException nor the Completed callback fires and syncEvent.WaitOne() blocks forever.
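If you prefer to keep AbortInstanceOnTransactionFailure set to true, a minimal sketch of a host-side alternative: subscribe to the Aborted callback as well, so the console application is signalled when the instance aborts and can log the reason.

workflowApp.Aborted =
    delegate (WorkflowApplicationAbortedEventArgs e)
    {
        // Aborted fires instead of Completed when the instance aborts
        // (for example, after a transaction failure); log and release the host.
        Console.WriteLine("Workflow aborted: " + e.Reason.Message);
        syncEvent.Set();
    };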


How to register a VS Code task which invokes a function

I am developing a new extension which uses tasks. I need to create a task which will call a function, rather than starting a new process or shell.
I can create a new task which can execute a shell command.
let task = new vscode.Task(kind, taskName, taskSource, new vscode.ShellExecution(`echo Hello World`));
I would like to make a task which will call another method. Is there a way to do this?
There happens to be a "proposed API" for this exact purpose. See the "custom execution" section in the March 2019 release notes, which includes a code example:
let execution = new vscode.CustomExecution((terminalRenderer, cancellationToken, args): Thenable<number> => {
    return new Promise<number>(resolve => {
        // This is the custom task callback!
        resolve(0);
    });
});
const taskName = "First custom task";
let task = new vscode.Task2(kind, vscode.TaskScope.Workspace, taskName, taskType, execution);
original issue: Allow extension to provide callback functions as tasks (#66818)
initial implementation pull request
relevant section in vscode.proposed.d.ts
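Note that CustomExecution was still a proposed API at the time: you need the vscode.proposed.d.ts typings in your project, "enableProposedApi": true in your package.json, and you must launch VS Code with --enable-proposed-api <your.extension.id>. Extensions that use proposed APIs cannot be published to the Marketplace.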

Parallel.Foreach and BulkCopy

I have a C# library which connects to 59 servers that share the same database structure and imports data into a single table in my local database. At the moment I am retrieving data server by server in a foreach loop:
foreach (var systemDto in systems)
{
    var sourceConnectionString = _systemService.GetConnectionStringAsync(systemDto.Ip).Result;
    var dbConnectionFactory = new DbConnectionFactory(sourceConnectionString,
        "System.Data.SqlClient");
    var dbContext = new DbContext(dbConnectionFactory);
    var storageRepository = new StorageRepository(dbContext);

    var usedStorage = storageRepository.GetUsedStorageForCurrentMonth();

    var dtUsedStorage = new DataTable();
    dtUsedStorage.Load(usedStorage);

    var dcIp = new DataColumn("IP", typeof(string)) { DefaultValue = systemDto.Ip };
    var dcBatchDateTime = new DataColumn("BatchDateTime", typeof(string))
    {
        DefaultValue = batchDateTime
    };
    dtUsedStorage.Columns.Add(dcIp);
    dtUsedStorage.Columns.Add(dcBatchDateTime);

    using (var blkCopy = new SqlBulkCopy(destinationConnectionString))
    {
        blkCopy.DestinationTableName = "dbo.tbl";
        blkCopy.WriteToServer(dtUsedStorage);
    }
}
Because there are many systems to retrieve data from, I wonder whether it is possible to use a Parallel.ForEach loop. Will SqlBulkCopy lock the table during WriteToServer, so that the next WriteToServer has to wait until the previous one completes?
-- EDIT 1
I've changed the foreach to Parallel.ForEach, but I face one problem. Inside the loop I have an async method, _systemService.GetConnectionStringAsync(systemDto.Ip), and this line returns the error:
System.NotSupportedException: A second operation started on this context before a previous asynchronous operation completed. Use 'await' to ensure that any asynchronous operations have completed before calling another method on this context. Any instance members are not guaranteed to be thread safe.
Any ideas how I can handle this?
In general, it will block and wait until the previous operation completes.
There are some factors that affect whether SqlBulkCopy can run in parallel or not.
I remember that when adding the parallel feature to my .NET Bulk Operations library, I had a hard time making it work correctly in parallel; it worked well when the table had no index, which is almost never the case.
Even when it worked, the performance gain was not much.
Perhaps you will find more information here: MSDN - Importing Data in Parallel with Table Level Locking
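As for the NotSupportedException from the edit: the EF context behind _systemService is not thread-safe, so one workaround is to resolve the connection strings sequentially before entering the parallel loop and give each iteration its own context and repository. A sketch only, reusing the poster's own types (DbConnectionFactory, StorageRepository) and assuming destinationConnectionString and batchDateTime are in scope as in the question; TableLock follows the linked article, and parallel loads only truly overlap when the destination table has no indexes:

// Resolve all connection strings up front, on one thread, because the
// shared EF context inside _systemService is not thread-safe.
var connectionStrings = new Dictionary<string, string>();
foreach (var systemDto in systems)
{
    connectionStrings[systemDto.Ip] =
        _systemService.GetConnectionStringAsync(systemDto.Ip).Result;
}

Parallel.ForEach(systems, systemDto =>
{
    // Per-iteration state only: nothing below is shared across threads.
    var dbConnectionFactory = new DbConnectionFactory(
        connectionStrings[systemDto.Ip], "System.Data.SqlClient");
    var storageRepository = new StorageRepository(new DbContext(dbConnectionFactory));

    var dtUsedStorage = new DataTable();
    dtUsedStorage.Load(storageRepository.GetUsedStorageForCurrentMonth());
    dtUsedStorage.Columns.Add(new DataColumn("IP", typeof(string)) { DefaultValue = systemDto.Ip });
    dtUsedStorage.Columns.Add(new DataColumn("BatchDateTime", typeof(string)) { DefaultValue = batchDateTime });

    // TableLock is the option discussed in the linked MSDN article.
    using (var blkCopy = new SqlBulkCopy(destinationConnectionString, SqlBulkCopyOptions.TableLock))
    {
        blkCopy.DestinationTableName = "dbo.tbl";
        blkCopy.WriteToServer(dtUsedStorage);
    }
});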

Celery send_task and retry on exception

I want to retry (official doc) a task when it raises an exception. Celery allows this by using retry in the form of self.retry(...).
Now, I can't figure out how to use self, since I have a function without any class.
My code is this:
... imports ...

app = Celery('elasticcelery')

@app.task(name='rm_doc')
def rm_doc(schema_id, id):
    es = Elasticsearch(es_ip)
    try:
        res = es.delete(schema_id, 'doc', id)
    except NotFoundError as e:
        <here goes the retry>
and it's called from another service in this way:
app_celery = Celery('celeryelastic')
app_celery.config_from_object('django.conf:settings')
app_celery.send_task('rm_doc', kwargs={"schema_id": schema_id, "id": document_id}, )
Now, I should add the self.retry, but there's no self in my method. How should I proceed?
PS: I tried adding self as a parameter, but this fails since there's no mapping when the task is called the first time from the remote.
I had forgotten bind=True in the decorator on the method; with it, I can add self.
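A minimal sketch of the resulting task; max_retries and countdown are illustrative values, not from the question:

@app.task(name='rm_doc', bind=True, max_retries=3)
def rm_doc(self, schema_id, id):
    # bind=True makes Celery pass the task instance as the first
    # argument, so self.retry(...) is now available.
    es = Elasticsearch(es_ip)
    try:
        res = es.delete(schema_id, 'doc', id)
    except NotFoundError as exc:
        # Re-queue this task; countdown delays the retry by 5 seconds.
        raise self.retry(exc=exc, countdown=5)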

Rollback transactions made by two different DBContext saves when an exception occurs

I need to save to two different databases after some user action. Currently, I have the following:
using (EFEntities1 dc = new EFEntities1())
{
    dc.USERS.Add(user);
    dc.SaveChanges();
}

using (EFEntities2 dc = new EFEntities2())
{
    dc.USERS.Add(user);
    dc.SaveChanges();
}
These are two separate code blocks within the same method, so I believe if the second one fails, the first one won't roll back. How do I make sure both transactions roll back if something fails?
You can wrap them in a TransactionScope. Note that this will probably escalate to the DTC.
using (TransactionScope scope = new TransactionScope())
{
    using (EFEntities1 dc = new EFEntities1())
    {
        dc.USERS.Add(user);
        dc.SaveChanges();
    }

    using (EFEntities2 dc = new EFEntities2())
    {
        dc.USERS.Add(user);
        dc.SaveChanges();
    }

    // Note: Complete(), not complete(); without this call the
    // transaction rolls back when the scope is disposed.
    scope.Complete();
}
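In this pattern, if the second SaveChanges throws, scope.Complete() is never reached, so both enlisted transactions roll back when the scope is disposed, including the insert into the first database. Because two different databases are enlisted, the transaction will typically be promoted to a distributed one, so the Distributed Transaction Coordinator (MSDTC) service must be running and configured on the machines involved.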

Error loading a persisted workflow

I have a workflow started and persisted using messaging activities.
The correlation between the initial Start command and the final Stop command works well if they're sent within a few seconds of each other.
Problems begin when the workflow is unloaded: the subsequent Stop message then throws the following FaultException:
If LoadWorkflowByInstanceKeyCommand.AssociateLookupKeyToInstanceId is not specified, the LookupInstanceKey must already be associated to an instance, or the LoadWorkflowByInstanceKeyCommand will fail. For this reason, it is invalid to also specify the LookupInstanceKey in the InstanceKeysToAssociate collection if AssociateLookupKeyToInstanceId isn't set
Can anybody help me?
The variables inside the workflow are of types int and XDocument.
This is the code to initialize the WorkflowServiceHost:
WorkflowServiceHost serviceHost = new WorkflowServiceHost(myWorkflow, new Uri(serviceUri));

ServiceDebugBehavior debug = serviceHost.Description.Behaviors.Find<ServiceDebugBehavior>();
if (debug == null)
{
    debug = new ServiceDebugBehavior();
    serviceHost.Description.Behaviors.Add(debug);
}
debug.IncludeExceptionDetailInFaults = true;

WorkflowIdleBehavior idle = serviceHost.Description.Behaviors.Find<WorkflowIdleBehavior>();
if (idle == null)
{
    idle = new WorkflowIdleBehavior();
    serviceHost.Description.Behaviors.Add(idle);
}
idle.TimeToPersist = TimeSpan.FromSeconds(2);
idle.TimeToUnload = TimeSpan.FromSeconds(10);

var behavior = new SqlWorkflowInstanceStoreBehavior
{
    ConnectionString = ConfigurationManager.ConnectionStrings["WorkflowPersistence"].ConnectionString,
    InstanceEncodingOption = InstanceEncodingOption.None,
    InstanceCompletionAction = InstanceCompletionAction.DeleteAll,
    InstanceLockedExceptionAction = InstanceLockedExceptionAction.BasicRetry,
    HostLockRenewalPeriod = new TimeSpan(0, 0, 30),
    RunnableInstancesDetectionPeriod = new TimeSpan(0, 0, 5)
};
serviceHost.Description.Behaviors.Add(behavior);

serviceHost.Open();
Looking at the database, it seems that the workflow is never suspended.
Any help appreciated, thank you.
I'm not really sure what is going on here, but it sounds like there are types used in the workflow that cannot be serialized, which prevents the workflow from being persisted. When you say "Looking at the database, it seems that the workflow is never suspended", do you really mean suspended? And why do you expect the workflow to be suspended?
What happens if you send just the Start message to the workflow and wait 2 seconds? Do you get a new record in the persistence database?