I've got an ASP.NET page with a simple form. The user fills out the form with some details and uploads a document, and some processing of the file then needs to happen on the server side.
My question is: what's the best approach to handling the server-side processing of the files? The processing involves calling an exe. Should I use separate threads for this?
Ideally I want the user to be able to submit the form without the web page just hanging there while the processing takes place.
I've tried this code but my task never runs on the server:
Action<object> action = (object obj) =>
{
    // Create a .xdu file for this job
    string xduFile = launcher.CreateSingleJobBatchFile(LanguagePair, SourceFileLocation);

    // Launch the job
    launcher.ProcessJob(xduFile);
};

Task job = new Task(action, "test");
job.Start();
Any suggestions are appreciated.
You could invoke the processing functionality asynchronously, in classic fire-and-forget fashion:
In .NET 4.0 you should do this using the new Task Parallel Library:
Task.Factory.StartNew(() =>
{
    // Do work
});
If you need to pass an argument to the action delegate you could do it like this:
Action<object> task = args =>
{
    // Do work with args
};

Task.Factory.StartNew(task, "SomeArgument");
In .NET 3.5 and earlier you would instead do it this way:
ThreadPool.QueueUserWorkItem(args =>
{
    // Do work
});
Related resources:
Task Class
ThreadPool.QueueUserWorkItem Method
Use:
ThreadPool.QueueUserWorkItem(o => MyFunc(arg0, arg1, ...));
Where MyFunc() does the server-side processing in the background after the user submits the page.
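For the exe-based processing in the question, MyFunc could simply wrap the two launcher calls from the question's code (a sketch; launcher, CreateSingleJobBatchFile, and ProcessJob are the question's own names, and the parameter list is an assumption):

ThreadPool.QueueUserWorkItem(o => MyFunc(LanguagePair, SourceFileLocation));

void MyFunc(string languagePair, string sourceFileLocation)
{
    // Create a .xdu file for this job (same call as in the question).
    string xduFile = launcher.CreateSingleJobBatchFile(languagePair, sourceFileLocation);

    // Launch the job; this is the step that invokes the exe, and it can
    // take as long as it needs without blocking the page response.
    launcher.ProcessJob(xduFile);
}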
I have a site that does some potentially long-running work and needs to stay responsive and update a timer for the user.
My solution was to build a state machine into the page with a hidden field and some session values.
I have these on my aspx side:
<asp:Timer ID="Timer1" runat="server" Interval="1600" />
<asp:HiddenField runat="server" ID="hdnASynchStatus" Value="" />
And my code looks something like:
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    PostbackStateEngineStep()
    UpdateElapsedTime()
End Sub

Private Sub PostbackStateEngineStep()
    If hdnASynchStatus.Value = "" And Not CBool(Session("WaitingForCallback")) Then
        Dim i As IAsyncResult = {...run something that spawns in its own thread, and calls ProcessCallBack when it's done...}
        Session.Add("WaitingForCallback", True)
        Session.Add("InvokeTime", DateTime.Now)
        hdnASynchStatus.Value = "WaitingForCallback"
    ElseIf CBool(Session("WaitingForCallback")) Then
        If Not CBool(Session("ProcessComplete")) Then
            hdnASynchStatus.Value = "WaitingForCallback"
        Else
            'ALL DONE HERE
            'Redirect to the next page now
            Response.Redirect(...)
        End If
    Else
        hdnASynchStatus.Value = "DoProcessing"
    End If
End Sub

Public Sub ProcessCallBack(ByVal ar As IAsyncResult)
    Session.Add("ProcessComplete", True)
End Sub

Private Sub UpdateElapsedTime()
    'Update a label with the elapsed time
End Sub
Especially if the signal processing needs to invoke one or more activities, how can I achieve that?
I tried returning data or throwing an exception, but neither works: data cannot be returned from a signal method, and throwing an exception blocks workflow execution.
Common mistakes
It's wrong to return data or throw an exception from a signal method, because a signal method is meant to be asynchronous. The processing works like Kafka consuming messages: you can't hand a result back through the method's return value.
So the code below will NOT work:
public class SampleWorkflow {
    public Result mySignalMethod(SignalRequest req) {
        Result result = activityStub.execute(req);
        if (...) {
            throw new RuntimeException(...);
        }
        return result;
    }
}
What should you do
What you must do:
Make sure the signal method doesn't return anything.
Use a query method to return the results.
In the signal method, store the results in workflow state so that the query method can return them.
As a bonus, use the design pattern of storing signal requests in a queue and letting the workflow method process them. This gives you some benefits:
It guarantees FIFO ordering of signal processing.
It makes sure resetting the workflow won't run into issues -- after a reset, signals are preserved and moved to an earlier position in the workflow history, but sometimes the workflow is not initialized enough to replay the signals.
It also makes exception handling easier.
See this design pattern in the sample code: Cadence Java sample / Temporal Java sample.
Applying all of the above, the sample code looks like this:
public class SampleWorkflow {

    private final Queue<SignalRequest> queue = new LinkedList<>();
    private Response lastSignalResponse;

    public void myWorkflowMethod() {
        Async.procedure(
            () -> {
                while (true) {
                    Workflow.await(() -> !queue.isEmpty());
                    final SignalRequest req = queue.poll();
                    // Alternatively, you can use async to start the activity:
                    try {
                        Result result = activityStub.execute(req);
                        if (...) {
                            lastSignalResponse = new Response(new RuntimeException(...));
                        } else {
                            lastSignalResponse = new Response(result);
                        }
                    } catch (ActivityException e) {
                        lastSignalResponse = new Response(e);
                    }
                }
            });
        ...
    }

    public Response myQueryMethod() {
        return lastSignalResponse;
    }

    public void mySignalMethod(SignalRequest req) {
        queue.add(req);
    }
}
And in the application code, you should signal and then query the workflow to get the result:
workflowStub.mySignalMethod(req);
Response response = workflowStub.myQueryMethod();
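Keep in mind the signal is processed asynchronously, so a query issued immediately after the signal may find no response yet. A simple polling loop is one way around that (a sketch; the one-second backoff is an arbitrary choice):

workflowStub.mySignalMethod(req);
Response response = workflowStub.myQueryMethod();
while (response == null) {
    Thread.sleep(1000); // arbitrary backoff; handle InterruptedException in real code
    response = workflowStub.myQueryMethod();
}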
Follow this sample-Cadence / sample-Temporal if you want to use an async activity.
Why
A signal is executed via a workflow decision task (a "workflow task" in Temporal). A decision task cannot return a result, and in the current design there is no mechanism for a decision task to return a result to application code.
Throwing an exception in workflow code will either block the decision task or fail the workflow.
A query method is designed to return a result -- however, a query cannot schedule activities or modify workflow state.
Letting application code make a synchronous API call that updates the workflow and returns data is a missing piece; it needs a complicated design: https://github.com/temporalio/proposals/pull/53
I have 2 systems which can communicate with each other through APIs.
Here is my code:
System A:
using (var transaction = new TransactionScope())
{
    var myBook = _bookRepository.Table.FirstOrDefault(x => x.Id == request.bookID);
    myBook.AssigneeId = null;
    _bookRepository.Update(myBook);

    var result = await _anotherBApi.ApproveBookAsync(request.bookID);
    if (result.ShStatus != ResponseStatus.Success)
    {
        result.ErrorType = ErrorType.Error;
        return result;
    }

    transaction.Complete();
}
ApproveBookAsync(request.bookID) calls system B's API. After handling it, system B calls back into system A's API to update the same book's information (the same record as above).
That's my code above. I can never reach transaction.Complete(), because when system B calls system A's API, a new transaction is created.
What I expect, step by step:
Update a Book instance with new information (sample ID = 1).
Call system B's API (after which system B also calls system A's API to update Book ID = 1).
If the call to system B fails, I want to roll back all the changes made before; if it succeeds, commit.
When using async/await inside a TransactionScope block, you need to opt in to having your transaction flow across thread continuations, like this:
using (var transaction = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // Your code that contains some calls to async methods.
    transaction.Complete();
}
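Applied to the block from the question, that would look like this (a sketch, keeping the question's own type and member names):

using (var transaction = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    var myBook = _bookRepository.Table.FirstOrDefault(x => x.Id == request.bookID);
    myBook.AssigneeId = null;
    _bookRepository.Update(myBook);

    // The ambient transaction now flows across this await.
    var result = await _anotherBApi.ApproveBookAsync(request.bookID);
    if (result.ShStatus != ResponseStatus.Success)
    {
        result.ErrorType = ErrorType.Error;
        return result; // disposing without Complete() rolls everything back
    }

    transaction.Complete(); // commit only after the remote call succeeds
}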
I have never set up a queueing system before, so I decided to give it a shot. The queueing system seems to be working perfectly; however, the data doesn't seem to be sent correctly. Here is my code.
...
$comment = new Comment(Input::all());
$comment->user_id = $user->id;
$comment->save();

if ($comment->isSaved())
{
    $voters = $comment->argument->voters->unique()->toArray();

    Queue::push('Queues\NewComment',
        [
            'comment' => $comment->load('argument', 'user')->toArray(),
            'voters' => $voters
        ]
    );

    return Response::json(['success' => true, 'comment' => $comment->load('user')->toArray()]);
}
...
The class that handles this looks like this:
class NewComment {
    public function fire($job, $data)
    {
        $comment = $data['comment'];
        $voters = $data['voters'];

        Log::info($data);

        foreach ($voters as $voter)
        {
            if ($voter['id'] != $comment['user_id'])
            {
                $mailer = new NewCommentMailer($voter, $comment);
                $mailer->send();
            }
        }

        $job->delete();
    }
}
This works beautifully on my local server using the synchronous queue driver. However, on my production server, I'm using Beanstalkd. The queue is firing like it is supposed to. However, I'm getting an error like this:
[2013-12-19 10:25:02] production.ERROR: exception 'ErrorException' with message 'Undefined index: voters' in /var/www/mywebsite/app/queues/NewComment.php:10
If I print out the $data variable passed into the NewComment queue handler, I get this:
[2013-12-19 10:28:05] production.INFO: {"comment":{"incrementing":true,"timestamps":true,"exists":true}} [] []
I have no clue why this is happening. Does anyone have an idea how to fix this?
So $voters apparently isn't being put into the queue as part of the payload. I'd build the payload array outside of the Queue::push() function, log the contents, and see exactly what is being put in.
I've found if you aren't getting something out that you expect, chances are, it's not being put in like you expect either.
While you are at it, make sure that the beanstalkd system hasn't got old data stuck in it that is incorrect. You could add a timestamp into the payload to help make sure it's the latest data, and arrange to delete or bury any jobs that don't have the appropriate information - checked before you start to process them. Just looking at a count of items in the beanstalkd tubes should make it plain if there are stuck jobs.
I've not done anything with Laravel, but I have written many tasks for other Beanstalkd and SQS-backed systems, and the hard part is when the job fails, and you have to figure out what went wrong, and how to avoid just redoing the same failure over and over again.
What I ended up doing was sticking with simple numbers. I only stored the comment's ID on the queue and then did all the processing in my queue handler class. That was the easiest way to do it.
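In code, that approach might look like this (a sketch reusing the class names from above; only the scalar ID travels through the queue, and the handler re-fetches fresh models):

Queue::push('Queues\NewComment', ['comment_id' => $comment->id]);

class NewComment {
    public function fire($job, $data)
    {
        // Re-fetch the comment and its relations fresh from the database.
        $comment = Comment::with('argument', 'user')->find($data['comment_id']);

        foreach ($comment->argument->voters->unique() as $voter)
        {
            if ($voter->id != $comment->user_id)
            {
                $mailer = new NewCommentMailer($voter->toArray(), $comment->toArray());
                $mailer->send();
            }
        }

        $job->delete();
    }
}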
You will get data as expected in the handler by wrapping the data in an array:
array(
    array(
        'comment' => $comment->load('argument', 'user')->toArray(),
        'voters' => $voters
    )
)
I am currently using the Apex Workbook to refresh my knowledge of Salesforce.
Tutorial #15, Lesson 1 offers the following code:
global class CleanUpRecords implements Database.Batchable<Object>
{
    global final String query;

    global CleanUpRecords(String q) { query = q; }

    global Database.QueryLocator start(Database.BatchableContext BC)
    {
        return Database.getQueryLocator(query);
    }

    global void execute(Database.BatchableContext BC, List<sObject> scope)
    {
        delete scope;
        Database.emptyRecycleBin(scope);
    }

    global void finish(Database.BatchableContext BC)
    {
        AsyncApexJob a = [
            SELECT Id, Status, NumberOfErrors, JobItemsProcessed, TotalJobItems, CreatedBy.Email
            FROM AsyncApexJob
            WHERE Id = :BC.getJobId()
        ];

        // Send an email to the Apex job's submitter
        // notifying of job completion.
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] { a.CreatedBy.Email };
        mail.setToAddresses(toAddresses);
        mail.setSubject('Record Clean Up Completed ' + a.Status);
        mail.setPlainTextBody(
            'The batch Apex job processed ' + a.TotalJobItems +
            ' batches with ' + a.NumberOfErrors + ' failures.'
        );
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
    }
}
However, regardless of which development interface I use (e.g. Force.com IDE, console, or Setup), when I try to save this, I get:
Multiple markers at this line
- File only saved locally, not to server
- Save error: CleanUpRecords: Class must implement the global interface method: Iterable<Object> start(Database.BatchableContext) from Database.Batchable<Object>, CleanUpRecords: Class must implement the global interface method: void execute(Database.BatchableContext, LIST<Object>) from
Database.Batchable<Object>
(Or some equivalent, depending upon how I try to save it.)
However, it seems to me the required methods are already there.
What's missing?
Prepare to be frustrated ... there's just one character off.
Your class declaration should be:
global class CleanUpRecords implements Database.Batchable<sObject> {
instead of:
global class CleanUpRecords implements Database.Batchable<Object> {
I have a workflow started and persisted using messaging activities.
The correlation between the initial Start command and the final Stop command works well if they're sent within a few seconds.
Problems begin when the workflow is unloaded, because a subsequent Stop message throws this FaultException:
If LoadWorkflowByInstanceKeyCommand.AssociateLookupKeyToInstanceId is not specified, the LookupInstanceKey must already be associated to an instance, or the LoadWorkflowByInstanceKeyCommand will fail. For this reason, it is invalid to also specify the LookupInstanceKey in the InstanceKeysToAssociate collection if AssociateLookupKeyToInstanceId isn't set
Can anybody help me?
The variables inside the workflow are of types int and XDocument.
This is the code to initialize the WorkflowServiceHost:
WorkflowServiceHost serviceHost = new WorkflowServiceHost(myWorkflow, new Uri(serviceUri));

ServiceDebugBehavior debug = serviceHost.Description.Behaviors.Find<ServiceDebugBehavior>();
if (debug == null)
{
    debug = new ServiceDebugBehavior();
    serviceHost.Description.Behaviors.Add(debug);
}
debug.IncludeExceptionDetailInFaults = true;

WorkflowIdleBehavior idle = serviceHost.Description.Behaviors.Find<WorkflowIdleBehavior>();
if (idle == null)
{
    idle = new WorkflowIdleBehavior();
    serviceHost.Description.Behaviors.Add(idle);
}
idle.TimeToPersist = TimeSpan.FromSeconds(2);
idle.TimeToUnload = TimeSpan.FromSeconds(10);

var behavior = new SqlWorkflowInstanceStoreBehavior
{
    ConnectionString = ConfigurationManager.ConnectionStrings["WorkflowPersistence"].ConnectionString,
    InstanceEncodingOption = InstanceEncodingOption.None,
    InstanceCompletionAction = InstanceCompletionAction.DeleteAll,
    InstanceLockedExceptionAction = InstanceLockedExceptionAction.BasicRetry,
    HostLockRenewalPeriod = new TimeSpan(00, 00, 30),
    RunnableInstancesDetectionPeriod = new TimeSpan(00, 00, 05)
};
serviceHost.Description.Behaviors.Add(behavior);

serviceHost.Open();
Looking at the database, it seems that the workflow is never suspended.
Any help appreciated, thank you.
I'm not really sure what is going on here, but it sounds like there are types used in the workflow that cannot be serialized, which prevents the workflow from being persisted. When you say "Looking at the database, it seems that the workflow is never suspended," do you really mean suspended? And why do you expect the workflow to be suspended?
What happens if you send just the start message to the workflow and wait 2 seconds? Do you get a new record in the persistence database?