I have a workflow started and persisted using messaging activities.
The correlation between the initial Start command and the final Stop command works well if they're sent within a few seconds.
Problems begin once the workflow has been unloaded, because the subsequent Stop message throws the following FaultException:
If LoadWorkflowByInstanceKeyCommand.AssociateLookupKeyToInstanceId is not specified, the LookupInstanceKey must already be associated to an instance, or the LoadWorkflowByInstanceKeyCommand will fail. For this reason, it is invalid to also specify the LookupInstanceKey in the InstanceKeysToAssociate collection if AssociateLookupKeyToInstanceId isn't set
Can anybody help me?
The variables inside the workflow are of types int and XDocument.
This is the code to initialize the WorkflowServiceHost:
WorkflowServiceHost serviceHost = new WorkflowServiceHost(myWorkflow, new Uri(serviceUri));

ServiceDebugBehavior debug = serviceHost.Description.Behaviors.Find<ServiceDebugBehavior>();
if (debug == null)
{
    debug = new ServiceDebugBehavior();
    serviceHost.Description.Behaviors.Add(debug);
}
debug.IncludeExceptionDetailInFaults = true;

WorkflowIdleBehavior idle = serviceHost.Description.Behaviors.Find<WorkflowIdleBehavior>();
if (idle == null)
{
    idle = new WorkflowIdleBehavior();
    serviceHost.Description.Behaviors.Add(idle);
}
idle.TimeToPersist = TimeSpan.FromSeconds(2);
idle.TimeToUnload = TimeSpan.FromSeconds(10);

var behavior = new SqlWorkflowInstanceStoreBehavior
{
    ConnectionString = ConfigurationManager.ConnectionStrings["WorkflowPersistence"].ConnectionString,
    InstanceEncodingOption = InstanceEncodingOption.None,
    InstanceCompletionAction = InstanceCompletionAction.DeleteAll,
    InstanceLockedExceptionAction = InstanceLockedExceptionAction.BasicRetry,
    HostLockRenewalPeriod = new TimeSpan(00, 00, 30),
    RunnableInstancesDetectionPeriod = new TimeSpan(00, 00, 05)
};
serviceHost.Description.Behaviors.Add(behavior);

serviceHost.Open();
Looking at the database, it seems that the workflow is never suspended.
Any help appreciated,
thank you
Not really sure what is going on here, but it sounds like there are types used in the workflow that cannot be serialized, which would prevent the workflow from being persisted. When you say "Looking at the database, it seems that the workflow is never suspended," do you really mean suspended? And why do you expect the workflow to be suspended?
What happens if you send just the start message to the workflow and wait 2 seconds? Do you get a new record in the persistence database?
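One quick way to answer that question is to query the persistence database directly. This is only a diagnostic sketch: it assumes the default SqlWorkflowInstanceStore schema (where the SQL scripts create an [Instances] view under the [System.Activities.DurableInstancing] schema) and reuses the "WorkflowPersistence" connection string from the host code above.
using System;
using System.Configuration;
using System.Data.SqlClient;

class PersistenceCheck
{
    static void Main()
    {
        // Assumes the default SqlWorkflowInstanceStore schema, which exposes an
        // [Instances] view under the [System.Activities.DurableInstancing] schema.
        var connectionString =
            ConfigurationManager.ConnectionStrings["WorkflowPersistence"].ConnectionString;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT InstanceId, CreationTime FROM [System.Activities.DurableInstancing].[Instances]",
            connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                // If no rows show up a couple of seconds after the Start message, the
                // instance was never persisted (pointing at serialization or the idle behavior
                // rather than correlation).
                while (reader.Read())
                {
                    Console.WriteLine("{0} persisted at {1}", reader.GetGuid(0), reader.GetDateTime(1));
                }
            }
        }
    }
}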
I'm supporting a production Xamarin Forms app with an offline sync feature implemented using Azure Mobile Services.
We have a lot of production issues related to users losing data or general instability that goes away if they reinstall the app. After having a look through, I think the issues are around how conflict resolution is handled in the app.
For every entity that tries to sync, we handle MobileServicePushFailedException, then traverse the errors returned and take action.
catch (MobileServicePushFailedException ex)
{
    foreach (var error in ex.PushResult.Errors) // These are MobileServiceTableOperationErrors
    {
        var status = error.Status; // HTTP status code returned
        // Take action based on this status.
        // If it's 409 or 412, we go into conflict resolution and try to decide whether the client or server version wins.
    }
}
The conflict resolution seems too custom to me, and I'm checking to see whether there are general guidelines.
For example, we seem to be getting empty values for the 'CreatedAt' & 'UpdatedAt' timestamps on both the local and server versions of the entities returned, which is weird.
var serverItem = error.Result;
var clientItem = error.Item;
// sometimes serverItem.UpdatedAt or clientItem.UpdatedAt is NULL. Since we use these 2 fields to determine who wins, we are stumped here
If anyone can point me to some guideline or sample code on how these conflicts should generally be handled using information from the MobileServiceTableOperationError, that would be highly appreciated.
I came across this code snippet in the following doc.
// Simple error/conflict handling.
if (syncErrors != null)
{
    foreach (var error in syncErrors)
    {
        if (error.OperationKind == MobileServiceTableOperationKind.Update && error.Result != null)
        {
            // Update failed, reverting to server's copy.
            await error.CancelAndUpdateItemAsync(error.Result);
        }
        else
        {
            // Discard local change.
            await error.CancelAndDiscardItemAsync();
        }

        Debug.WriteLine(@"Error executing sync operation. Item: {0} ({1}). Operation discarded.",
            error.TableName, error.Item["id"]);
    }
}
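Regarding the null UpdatedAt values mentioned in the question, a common pattern is last-writer-wins with a server-wins fallback whenever a timestamp is missing. This is only a sketch, not official guidance: ResolveByTimestampAsync is a made-up helper name, and UpdateOperationAsync is available in the current Azure Mobile Apps client SDK (check your package version if it's missing).
using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.Sync;
using Newtonsoft.Json.Linq;

static class ConflictHandlers
{
    // Hypothetical helper: call it for the 409/412 errors from the push result.
    public static async Task ResolveByTimestampAsync(MobileServiceTableOperationError error)
    {
        JObject serverItem = error.Result; // server's copy (may be null)
        JObject clientItem = error.Item;   // local copy

        if (serverItem == null)
        {
            // Nothing came back from the server to merge with; drop the local change.
            await error.CancelAndDiscardItemAsync();
            return;
        }

        var serverUpdated = (DateTimeOffset?)serverItem[MobileServiceSystemColumns.UpdatedAt];
        var clientUpdated = (DateTimeOffset?)clientItem[MobileServiceSystemColumns.UpdatedAt];

        if (serverUpdated == null || clientUpdated == null || serverUpdated >= clientUpdated)
        {
            // Server wins (also the safe fallback when a timestamp is missing):
            // cancel the pending operation and overwrite the local row with the server copy.
            await error.CancelAndUpdateItemAsync(serverItem);
        }
        else
        {
            // Client wins: take the server's version so the next push passes the
            // precondition check, then keep the pending operation with the local values.
            clientItem[MobileServiceSystemColumns.Version] = serverItem[MobileServiceSystemColumns.Version];
            await error.UpdateOperationAsync(clientItem);
        }
    }
}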
I found an example of surfacing conflicts to the UI in this doc:
private async Task ResolveConflict(TodoItem localItem, TodoItem serverItem)
{
    // Ask user to choose the resolution between versions.
    MessageDialog msgDialog = new MessageDialog(
        String.Format("Server Text: \"{0}\" \nLocal Text: \"{1}\"\n", serverItem.Text, localItem.Text),
        "CONFLICT DETECTED - Select a resolution:");

    UICommand localBtn = new UICommand("Commit Local Text");
    UICommand ServerBtn = new UICommand("Leave Server Text");
    msgDialog.Commands.Add(localBtn);
    msgDialog.Commands.Add(ServerBtn);

    localBtn.Invoked = async (IUICommand command) =>
    {
        // To resolve the conflict, update the version of the item being committed. Otherwise, you will keep
        // catching a MobileServicePreconditionFailedException.
        localItem.Version = serverItem.Version;

        // Updating recursively here just in case another change happened while the user was making a decision.
        UpdateToDoItem(localItem);
    };

    ServerBtn.Invoked = async (IUICommand command) =>
    {
        RefreshTodoItems();
    };

    await msgDialog.ShowAsync();
}
I hope this helps provide some direction. Although the Azure Mobile docs have been deprecated, the SDK hasn't changed and should still be relevant. If this doesn't help, let me know what you're using for a backend store.
I am working in a Service Fabric application that uses IReliableQueue. For the uses cases of this system, the IReliableConcurrentQueue makes sense to use and some local testing (i.e. basically by just changing the code to use IReliableConcurrentQueue instead of IReliableQueue - queue name does not change) shows great performance improvements. However, I am worried about the impact of changing this in a production system (i.e. upgrading). I can't find any docs or online questions (unless I just missed them) about these considerations. For example, in this system, the existing IReliableQueue will almost always have items. So what happens to that data when I upgrade the SF application? Will it be available to dequeue in the IReliableConcurrentQueue? Or would data be lost? I know I can "just try it" but wanted to see if someone out there had done the same or could offer pointers to existing resources. Thanks!
Sorry for the late answer (you probably don't need it anymore, but still).
When we call the GetOrAddAsync method on IReliableStateManager, we aren't just retrieving an interface to stored values - we are actually creating an instance of a reliable collection. This basically means that the type of the interface we specify is very important.
Taking this into account, if we do this:
Service v. 1.0
// Somewhere in RunAsync for example
await this.StateManager.GetOrAddAsync<IReliableQueue<long>>("MyCollection")
Then doing this in the next version:
Service v. 1.1
// Somewhere in RunAsync for example
await this.StateManager.GetOrAddAsync<IReliableConcurrentQueue<long>>("MyCollection")
will throw an exception:
Returned reliable object of type Microsoft.ServiceFabric.Data.Collections.DistributedQueue`1[System.Int64] cannot be casted to requested type Microsoft.ServiceFabric.Data.Collections.IReliableConcurrentQueue`1[System.Int64]
and then:
System.ExecutionEngineException: 'Exception of type 'System.ExecutionEngineException' was thrown.'
The above exception looked like a bug, so I have filed one.
UPDATE 2019.06.28
It turned out that the appearance of System.ExecutionEngineException isn't a bug but rather an undocumented behavior of the Environment.FailFast method in combination with the Visual Studio debugger.
Please see my comment on the above issue.
This is what would happen.
There are plenty of ways to overcome this.
Here is the most obvious one:
Example
var migrate = false; // This flag indicates whether the migration was already done.
var migrateValues = new List<long>();

var applicationFlags = await this.StateManager
    .GetOrAddAsync<IReliableDictionary<string, bool>>("application-flags");

using (var transaction = this.StateManager.CreateTransaction())
{
    var flag = await applicationFlags
        .TryGetValueAsync(transaction, "queue-to-concurrent-queue-migration");

    if (!flag.HasValue || !flag.Value)
    {
        var queue = await this.StateManager
            .GetOrAddAsync<IReliableQueue<long>>("value-collection");

        for (;;)
        {
            var c = await queue.TryDequeueAsync(transaction);
            if (!c.HasValue)
            {
                break;
            }

            migrateValues.Add(c.Value);
        }

        // This transaction is intentionally never committed; the values are only read out
        // here, and the old collection is removed as a whole below.
        migrate = true;
    }
}

if (migrate)
{
    await this.StateManager.RemoveAsync("value-collection");

    using (var transaction = this.StateManager.CreateTransaction())
    {
        var concurrentQueue = await this.StateManager
            .GetOrAddAsync<IReliableConcurrentQueue<long>>("value-collection");

        foreach (var i in migrateValues)
        {
            await concurrentQueue.EnqueueAsync(transaction, i);
        }

        await applicationFlags.AddOrUpdateAsync(
            transaction,
            "queue-to-concurrent-queue-migration",
            true,
            (s, b) => true);

        // The commit must happen inside the using block, while the transaction is still in scope.
        await transaction.CommitAsync();
    }
}
Please note that this code is just an illustrative example and should be properly tested before applying it to a real-life application.
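Another way to overcome this, sketched below under the same assumptions, is to register the concurrent queue under a new name and drain the old queue into it in a single transaction, which avoids removing and re-registering a collection under the same name ("value-collection-v2" is a made-up name for this example).
// Hedged alternative sketch: migrate by draining the old queue into a concurrent queue
// registered under a NEW name.
var oldQueue = await this.StateManager
    .GetOrAddAsync<IReliableQueue<long>>("value-collection");
var newQueue = await this.StateManager
    .GetOrAddAsync<IReliableConcurrentQueue<long>>("value-collection-v2");

using (var transaction = this.StateManager.CreateTransaction())
{
    while (true)
    {
        var item = await oldQueue.TryDequeueAsync(transaction);
        if (!item.HasValue)
        {
            break;
        }

        await newQueue.EnqueueAsync(transaction, item.Value);
    }

    // Dequeues and enqueues commit atomically, so a failure mid-migration leaves the old queue intact.
    await transaction.CommitAsync();
}

// Only after the migration has committed is the old collection dropped.
await this.StateManager.RemoveAsync("value-collection");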
I have a C# library which connects to 59 servers with the same database structure and imports data into the same table in my local DB. At the moment I am retrieving data server by server in a foreach loop:
foreach (var systemDto in systems)
{
    var sourceConnectionString = _systemService.GetConnectionStringAsync(systemDto.Ip).Result;
    var dbConnectionFactory = new DbConnectionFactory(sourceConnectionString,
        "System.Data.SqlClient");
    var dbContext = new DbContext(dbConnectionFactory);
    var storageRepository = new StorageRepository(dbContext);

    var usedStorage = storageRepository.GetUsedStorageForCurrentMonth();
    var dtUsedStorage = new DataTable();
    dtUsedStorage.Load(usedStorage);

    var dcIp = new DataColumn("IP", typeof(string)) { DefaultValue = systemDto.Ip };
    var dcBatchDateTime = new DataColumn("BatchDateTime", typeof(string))
    {
        DefaultValue = batchDateTime
    };
    dtUsedStorage.Columns.Add(dcIp);
    dtUsedStorage.Columns.Add(dcBatchDateTime);

    using (var blkCopy = new SqlBulkCopy(destinationConnectionString))
    {
        blkCopy.DestinationTableName = "dbo.tbl";
        blkCopy.WriteToServer(dtUsedStorage);
    }
}
Because there are many systems to retrieve data from, I wonder if it is possible to use a Parallel.ForEach loop. Will SqlBulkCopy lock the table during WriteToServer, and will the next WriteToServer wait until the previous one completes?
-- EDIT 1
I've changed the foreach to Parallel.ForEach, but I face one problem. Inside this loop I have an async method call: _systemService.GetConnectionStringAsync(systemDto.Ip), and this line throws an error:
System.NotSupportedException: A second operation started on this context before a previous asynchronous operation completed. Use 'await' to ensure that any asynchronous operations have completed before calling another method on this context. Any instance members are not guaranteed to be thread safe.
Any ideas how I can handle this?
In general, it will be blocked and will wait until the previous operation completes.
There are some factors that may affect whether SqlBulkCopy can be run in parallel or not.
I remember that when adding the Parallel feature to my .NET Bulk Operations, I had a hard time making it work correctly in parallel, but it did work well when the table had no index (which is almost never the case).
Even when it worked, the performance gain was not that significant.
Perhaps you will find more information here: MSDN - Importing Data in Parallel with Table Level Locking
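Regarding the NotSupportedException from EDIT 1: it comes from using the shared EF context behind _systemService from multiple threads. Here is a sketch of one way around it, reusing the names from the question (systems, _systemService, batchDateTime, destinationConnectionString) and assuming the surrounding method is async: resolve all connection strings sequentially first and create everything else per iteration. SqlBulkCopyOptions.TableLock matches the table-level-locking approach from the linked article.
// Resolve the connection strings up front on a single thread, so the shared EF context
// behind _systemService is never used concurrently.
var connectionStrings = new Dictionary<string, string>();
foreach (var systemDto in systems)
{
    connectionStrings[systemDto.Ip] = await _systemService.GetConnectionStringAsync(systemDto.Ip);
}

Parallel.ForEach(systems, systemDto =>
{
    // Everything below is created per iteration, so nothing is shared between threads.
    var dbConnectionFactory = new DbConnectionFactory(connectionStrings[systemDto.Ip],
        "System.Data.SqlClient");
    var dbContext = new DbContext(dbConnectionFactory);
    var storageRepository = new StorageRepository(dbContext);

    var dtUsedStorage = new DataTable();
    dtUsedStorage.Load(storageRepository.GetUsedStorageForCurrentMonth());
    dtUsedStorage.Columns.Add(new DataColumn("IP", typeof(string)) { DefaultValue = systemDto.Ip });
    dtUsedStorage.Columns.Add(new DataColumn("BatchDateTime", typeof(string)) { DefaultValue = batchDateTime });

    // TableLock requests a bulk-update lock for the whole copy; as the linked article explains,
    // parallel loads with table-level locking mainly pay off when the target table has no indexes.
    using (var blkCopy = new SqlBulkCopy(destinationConnectionString, SqlBulkCopyOptions.TableLock))
    {
        blkCopy.DestinationTableName = "dbo.tbl";
        blkCopy.WriteToServer(dtUsedStorage);
    }
});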
I am a bit new to Windows Workflow Foundation, so this might be very straightforward, but I am stuck. I have a very simple sequential workflow with a couple of code activities inside a TransactionScope activity.
I am running my workflow from a console application with the following code:
Activity workflow = new Process();

var inputArgument = new Dictionary<string, object>();
inputArgument["Argument 1"] = 1234567;
inputArgument["Argument 2"] = 1234567;
inputArgument["Argument 3"] = "GUID";
inputArgument["Argument 4"] = @"\\filepath\";

var syncEvent = new AutoResetEvent(false);

var workflowApp = new WorkflowApplication(workflow, inputArgument);

workflowApp.OnUnhandledException =
    delegate (WorkflowApplicationUnhandledExceptionEventArgs e)
    {
        return UnhandledExceptionAction.Terminate;
    };

workflowApp.Completed +=
    delegate (WorkflowApplicationCompletedEventArgs e)
    {
        syncEvent.Set();
    };

workflowApp.Run();
syncEvent.WaitOne();
If I don't add the TransactionScope activity, my workflow runs fine, and in the case of an exception the workflow instance terminates and my console application closes as well.
However, when I add the TransactionScope activity, if any activity fails inside it then my workflow instance keeps running, and so does my console application. Can anyone guide me on how to terminate the instance?
I am not handling any exceptions within my workflow and want to keep it that way, so that I can log the exception details.
If you go to the properties of the TransactionScope in the workflow, there is a property called AbortInstanceOnTransactionFailure that is set to true by default. Set it to false. It should then behave as you're expecting.
When this is enabled, a transaction failure causes the workflow instance to abort but not terminate.
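For reference, here is a minimal sketch of both sides of this; the TransactionScope body is a placeholder, and workflowApp/syncEvent are the variables from the console code in the question.
// Hedged sketch. In code, the same property the designer exposes looks like this:
var scope = new System.Activities.Statements.TransactionScope
{
    // With this set to false, a failed transaction faults the activity instead of aborting
    // the whole instance, so OnUnhandledException/Completed still fire and the console exits.
    AbortInstanceOnTransactionFailure = false,
    Body = new WriteLine { Text = "work inside the transaction" } // placeholder body
};

// If you keep AbortInstanceOnTransactionFailure enabled, an aborted instance never reaches
// Completed, so also release the wait handle from the Aborted callback:
workflowApp.Aborted = delegate (WorkflowApplicationAbortedEventArgs e)
{
    Console.WriteLine("Workflow aborted: " + e.Reason.Message);
    syncEvent.Set();
};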
I'm finding this incredibly frustrating. I'm trying to use the InventoryFacadeClient to call either the Change or Sync web services to update product availability. The issue I'm facing is that I can't seem to instantiate all of the required DataTypes to populate the request.
It's quite confusing; I wanted to call ChangeInventory but couldn't compose the request, then started down SyncProductAvailability but, again, couldn't compose the request.
The problem below is that the ProductIdentifierType is null, and there's no corresponding createProductIdentifierType on the Factory... I'm not sure what I'm missing here; the factory seems to be half-baked.
If someone can help me complete this code, that would be great.
public void setUp() throws Exception {
    String METHOD_NAME = "setUp";
    logger.info("{} entering", METHOD_NAME);
    super.setUp();

    InventoryFacadeClient iClient = super.initializeInventoryClient(false);
    InventoryFactory f = com.ibm.commerce.inventory.datatypes.InventoryFactory.eINSTANCE;
    com.ibm.commerce.inventory.facade.datatypes.InventoryFactory cf = iClient.getInventoryFactory();
    CommerceFoundationFactory fd = iClient.getCommerceFoundationFactory();

    // We must have customised the SyncProductAvailability web service to
    // handle the ATP inventory model.
    SyncProductAvailabilityDataAreaType dataArea = f.createSyncProductAvailabilityDataAreaType();
    SyncProductAvailabilityType sat = f.createSyncProductAvailabilityType();
    sat.setDataArea(dataArea);

    DocumentRoot root = cf.createDocumentRoot();
    sat.setVersionID(root.getInventoryAvailabilityBODVersion());

    ProductAvailabilityType pat = f.createProductAvailabilityType();
    ProductIdentifierType pid = pat.getProductIdentifier();
I found the answer to this on another forum. I was missing the right CommerceFoundationFactory; the class that ProductIdentifierType is created from is:
com.ibm.commerce.foundation.datatypes.CommerceFoundationFactory fd2 =
    com.ibm.commerce.foundation.datatypes.CommerceFoundationFactory.eINSTANCE;
ProductIdentifierType pid = fd2.createProductIdentifierType();