Display progress when running a long operation? - asp.net-mvc-2

In my ASP.NET MVC 3 project, I've got an action that runs for a certain amount of time.
It would be nice if it could send partial responses back to the view.
The goal would be to show the user some progress information.
Does anybody have a clue how to make that work?
I tried writing directly to the response, but it isn't sent to the client in parts; it arrives all in one block:
[HttpPost]
public string DoTimeConsumingThings(int someId)
{
    for (int i = 0; i < 10; i++)
    {
        this.Response.Write(i.ToString());
        this.Response.Flush();
        Thread.Sleep(500); // Simulate time-consuming action
    }
    return "Done";
}
In the view:
@Ajax.ActionLink("TestLink", "Create", new AjaxOptions()
    { HttpMethod = "POST", UpdateTargetId = "ProgressTarget" })<br />
<div id="ProgressTarget"></div>
Can anybody help me produce progressive action results?
Thanks!!

Here's how you could implement this: start by defining a class that will hold the state of the long-running operation; you will need properties such as the id, the progress, the result, and so on. Then you will need two controller actions: one that starts the task and another that returns its progress. The Start action will spawn a new thread to execute the long-running operation and return immediately. Once a task is started, you could store its state in some common storage, such as the Application, keyed by the task id.
The second controller action is passed the task id and queries the Application for the progress of that task. Meanwhile the background thread keeps running, and every time it makes progress it updates the task's progress in the Application.
The last part is the client: poll the progress controller action at regular intervals using AJAX and update a progress indicator.
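A minimal sketch of that idea, assuming an MVC 3 controller; the TaskState class, the Start/Progress action names and the "task_" Application key are illustrative, not taken from the question:
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Mvc;

public class TaskState
{
    public int Progress { get; set; } // 0..100
    public bool Done { get; set; }
}

public class LongRunningController : Controller
{
    [HttpPost]
    public ActionResult Start(int someId)
    {
        var taskId = Guid.NewGuid().ToString();
        var state = new TaskState();
        HttpContext.Application["task_" + taskId] = state;

        // Return immediately; the work continues on a background thread.
        Task.Factory.StartNew(() =>
        {
            for (int i = 1; i <= 10; i++)
            {
                Thread.Sleep(500);       // simulate time-consuming work
                state.Progress = i * 10; // publish progress to shared storage
            }
            state.Done = true;
        });

        return Json(new { taskId });
    }

    [HttpGet]
    public ActionResult Progress(string taskId)
    {
        var state = HttpContext.Application["task_" + taskId] as TaskState;
        return Json(new { progress = state == null ? 0 : state.Progress },
            JsonRequestBehavior.AllowGet);
    }
}
On the client you would call Start, then poll Progress with setInterval and $.getJSON (or similar) and write the result into the ProgressTarget div. Bear in mind this is only a sketch: a fire-and-forget thread plus the Application dictionary loses work on an app-pool recycle, so a real implementation would use more durable storage.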

Related

Can a child workflow be executed asynchronously?

I'm trying to implement a perpetual workflow that commences with an activity that blocks until a message is delivered (namely, Redis' BLPOP). Once it completes, I want to start a new workflow asynchronously to do some sort of processing and return ContinueAsNew immediately.
I've tried to start the processing workflow using child workflows. What I've observed is that my parent workflow completes before the child is executed, unless I wait on the returned future, which I don't really want to do.
What would be the right way to do this? Is it possible to start a new regular workflow from within a workflow? Would such an action be implemented as part of the workflow or within an activity?
Thank you in advance!
The solution is to wait for the child workflow to start before completing the parent or continuing it as new.
If you are using the Go Cadence client, workflow.ExecuteChildWorkflow returns a ChildWorkflowFuture, which extends a Future that yields the child workflow result. It also has a GetChildWorkflowExecution method that returns a Future that becomes ready as soon as the child is started. So, to wait for the child workflow to start, the following code can be used:
f := workflow.ExecuteChildWorkflow(ctx, childFunc)
var childWE workflow.Execution
// The following line unblocks as soon as the child is started.
if err := f.GetChildWorkflowExecution().Get(ctx, &childWE); err != nil {
    return err
}
The child workflow has now started; its workflow ID is in childWE.ID and its run ID is in childWE.RunID.
The Java equivalent is:
ChildType child = Workflow.newChildWorkflowStub(ChildType.class);
// result promise becomes ready when the child completes
Promise<String> result = Async.function(child::executeMethod);
// childWE promise becomes ready as soon as the child is started
Promise<WorkflowExecution> childWE = Workflow.getWorkflowExecution(child);
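The Java snippet above only obtains the promise; to mirror the Go example and block until the child has actually started before returning or continuing as new, you would wait on it. A one-line sketch, assuming the standard Promise API of the Cadence Java client (this line is not part of the original answer):
// Blocks until the child workflow has started.
WorkflowExecution childExecution = childWE.get();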

(Laravel 5) Monitor and optionally cancel an ALREADY RUNNING job on queue

I need the ability to monitor, and be able to cancel, an ALREADY RUNNING job on the queue.
There are a lot of answers about deleting QUEUED jobs, but none about an already running one.
This is the situation: I have a "job" which consists of HUNDREDS OF THOUSANDS of rows in a database that need to be queried ONE BY ONE against a web service.
Every row needs to be picked up, queried against the web service, its response stored, and its status updated.
I already had this working as a Command (launching from / outputting to the console), but now I need to implement queues in order to allow piling up more jobs from more users.
So far I've looked at Horizon (which doesn't run on Windows due to missing process-control libs). However, from the demos I've seen, it lacks (I believe) a couple of things I need:
Dynamically configurable timeout (the whole job may take more than 12 hours, depending on the number of rows to process on the selected job)
Ability to CANCEL an ALREADY RUNNING job.
I also considered generating EACH REQUEST as a new job instead of treating the whole collection of rows as one "job" (this would get around the timeout issue), but that would give me a Horizon "pending jobs" list of hundreds of thousands of records per job, and that would kill the browser (I know Redis can handle it without breaking a sweat). Furthermore, I guess it is not possible to cancel "all jobs belonging to X tag".
I've been thinking about hitting an API route, firing the job, and decoupling it from the app, but it looks like this requires forking processes.
For the ability to cancel, I would add a database table keyed by job_id, and when the user hits an API endpoint to cancel a job, I'd mark it as "halted". On every loop the job would check its status, and if it finds "halted" it would stop itself.
If I've missed any aspect, just holler and I'll add or clarify it.
So I'm asking for advice here, since I'm new to Laravel: how could I achieve this?
So I finally came up with this (a bit clunky) solution:
In Controller:
public function cancelJob()
{
    $jobs = DB::table('jobs')->get();
    # I could use a specific ID, a user/owner filter, etc.
    foreach ($jobs as $job) {
        DB::table('jobs')->delete($job->id);
    }
    # Create the halt-signal file (its purpose is self-explanatory)
    touch(base_path(config('files.halt_process_signal')));
    return "Job cancelled - It will stop soon";
}
In the job class (inside the Model::chunk() callback):
# CHECK FOR HALT SIGNAL AND [OPTIONALLY] STOP THE PROCESS
if ($this->service->shouldHaltProcess()) {
    # build stats, do some cleanup, log, etc...
    $this->halted = true;
    $this->service->stopProcess();
    # Returning FALSE is what makes the chunk() method stop looping
    return false;
}
In service class:
/**
 * Checks the existence of the 'Halt Process Signal' file
 *
 * @return bool
 */
public function shouldHaltProcess(): bool
{
    return file_exists($this->config['files.halt_process_signal']);
}
/**
 * Stop the batch process
 *
 * @return void
 */
public function stopProcess(): void
{
    logger()->info("=== HALT PROCESS SIGNAL FOUND - STOPPING THE PROCESS ===");
    $this->deleteHaltProcessSignalFile();
    return;
}
It doesn't look very elegant, but it works.
I've searched all over the web, and most answers go for Horizon or other tools that don't fit my case.
If anyone has a better way to achieve this, you're welcome to share it.
Laravel queues have three important config options:
1. retry_after
2. timeout
3. tries
See more: https://laravel.com/docs/5.8/queues
Dynamically configurable timeout (the whole job may take more than 12 hours, depending on the number of rows to process on the selected job)
I think you can set timeout + retry_after to about 24 hours.
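A hedged sketch of where those options live; the values and the job class name are illustrative, not from the question. Per the linked docs, retry_after is set per connection in config/queue.php and should stay larger than the job's timeout, while timeout and tries can be public properties on the job class:
// config/queue.php (database or redis connection)
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 90000, // seconds; keep this larger than the job timeout
],

// app/Jobs/ProcessServiceQueries.php (illustrative name)
class ProcessServiceQueries implements \Illuminate\Contracts\Queue\ShouldQueue
{
    public $timeout = 86400; // 24 hours for one run of the job
    public $tries = 1;       // don't automatically retry a half-finished batch

    // handle() would loop over the rows and check the halt signal as in the accepted answer
}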
Ability to CANCEL an ALREADY RUNNING job.
Delete the job's row in the jobs table
Kill the worker process by its process id on your server
Hope it helps you :)

Eclipse Plugin Cancel Completely

When you implement a Job in an Eclipse plugin and override the run() method, you can check the IProgressMonitor parameter and skip tasks if the user pressed Cancel, like this:
if (!monitor.isCanceled()) {
    monitor.subTask("Doing stuff");
    //do task
} else {
    returnedStatus = Status.CANCEL_STATUS;
}
But that means that at least the currently active task has to finish before the rest is skipped. Is there any way to completely abort the plugin's activity and execute a finally block when the user presses Cancel, without waiting for the next if (!monitor.isCanceled()) and without subdividing your whole program into subTasks?
No. Your Job has to be the one to react to cancellation, so you need to either break the job up into tasks for which you can report progress with worked() and check cancellation, or send around sub-progress monitors and do the same thing; a sketch of that pattern follows below the link.
https://eclipse.org/articles/Article-Progress-Monitors/article.html
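A minimal sketch of that pattern, assuming the standard org.eclipse.core.runtime APIs; the class name and the unit-of-work helper are illustrative:
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;

public class LongRunningJob extends Job {

    public LongRunningJob() {
        super("Long running work");
    }

    @Override
    protected IStatus run(IProgressMonitor monitor) {
        monitor.beginTask("Doing stuff", 10);
        try {
            for (int i = 0; i < 10; i++) {
                // Check for cancellation between units of work.
                if (monitor.isCanceled()) {
                    return Status.CANCEL_STATUS;
                }
                doOneUnitOfWork(); // illustrative helper
                monitor.worked(1); // report progress
            }
            return Status.OK_STATUS;
        } finally {
            // Cleanup here runs whether the job finished or was cancelled.
            monitor.done();
        }
    }

    private void doOneUnitOfWork() {
        // ... one slice of the actual work ...
    }
}
The finer the units, the sooner cancellation takes effect, which is exactly the trade-off the question is running into.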

How to know whether UI rendering is completed in automation code

I want to know whether a button is rendered on the main window UI or not. Whether the button renders depends on the server's response (the app is written in Objective-C). If the server response arrives correctly, the button is rendered (VISIBLE); otherwise it is not present (INVISIBLE). Whenever it becomes visible, I tap on it to continue with the next step.
I wrote this code:
UIATarget.localTarget().pushTimeout(200);
//My code
UIATarget.localTarget().popTimeout();
With the above code I have to wait up to 200 seconds, but I only want to wait until the object appears on screen; I don't want to stay stuck in WAITING MODE the whole time.
How should I write this in UI Automation?
Thanks
OK, this might give you an idea of how to follow up:
For your view, implement an accessibilityValue method that returns a JSON-formatted value:
- (NSString *)accessibilityValue
{
    return [NSString stringWithFormat:
        @"{'MyButtonisVisible':%@}",
        self.MyButton.isHidden ? @"false" : @"true"];
}
Then you can access it from your test JavaScript:
var thisproperty = eval("(" + element.value() + ")");
if (thisproperty.MyButtonisVisible) {
UIATarget.localTarget().tap({"x":100, "y":100});
}
Hope that helps.
If you give the button a different name when you enable it, you can do this:
var awesomeButton = target.frontMostApp().mainWindow().buttons()[0];
UIATarget.localTarget().pushTimeout(200);
awesomeButton.withName("My Awesome Button");
if (!awesomeButton.isVisible()) {
    UIALogger.logError("Error no awesome button!");
}
UIATarget.localTarget().popTimeout();
withName will repeatedly test the name, and control will return to your script once the name matches or when the timeout is reached.
Per Apple's Doc
withName:
Tests if the name attribute of the element has the given string value. If the match fails, the test is retried until the current timeout expires.
Timeout Periods:
If the action completes during the timeout period, that line of code returns, and your script can proceed. If the action doesn’t complete during the timeout period, an exception is thrown.
https://developer.apple.com/library/etc/redirect/xcode/ios/e808aa/documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/UsingtheAutomationInstrument/UsingtheAutomationInstrument.html#//apple_ref/doc/uid/TP40004652-CH20

Play 1.2.3 framework - Right way to commit transaction

We have an HTTP endpoint that takes a long time to run and can also be called concurrently by users. As part of this request, we update the model inside a synchronized block so that other (possibly concurrent) requests pick up that change.
E.g.
MyModel m = null;
synchronized (lockObject) {
    m = MyModel.findById(id);
    if (m.status == PENDING) {
        m.status = ACTIVE;
    } else {
        //render a response back to user that the operation is not allowed
    }
    m.save(); //Is not expected to be called unless we set m.status = ACTIVE
}
//Long running operation continues here. It can involve further changes to instance "m"
The reason for the synchronized block is to ensure that even concurrent requests pick up the latest status. However, the underlying JPA does not commit my changes (m.save()) until the request is complete. Since this is a long-running request, I do not want to wait until the request completes, yet I still want to ensure that other callers are notified of the change in status. I tried calling "m.em().flush(); JPA.em().getTransaction().commit();" after m.save(), but that makes the transaction unavailable for the subsequent actions in the same request. Can I just call "JPA.em().getTransaction().begin();" afterwards and let Play handle the transaction from then on? If not, what is the best way to handle this use case?
UPDATE:
Based on the response, I modified my code as follows:
MyModel m = null;
synchronized (lockObject) {
    m = MyModel.findById(id);
    if (m.status == PENDING) {
        m.status = ACTIVE;
    } else {
        //render a response back to user that the operation is not allowed
    }
    m.save(); //Is not expected to be called unless we set m.status = ACTIVE
}
new MyModelUpdateJob(m.id).now();
And in my job, I have the following line:
doJob() {
    MyModel m = MyModel.findById(id);
    print m.status; //This still prints the old status, as if m.save() had no effect...
}
What am I missing?
Put your update code in a job and call
new MyModelUpdateJob(id).now().get();
Thus the update will be done in another transaction that is committed at the end of the job.
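A minimal sketch of what such a job might look like in Play 1.x; the package layout is illustrative, and it assumes PENDING and ACTIVE are constants on MyModel as in the question's code:
package jobs;

import models.MyModel;
import play.jobs.Job;

public class MyModelUpdateJob extends Job {

    private final Long id;

    public MyModelUpdateJob(Long id) {
        this.id = id;
    }

    @Override
    public void doJob() {
        // Runs in its own transaction, which Play commits when doJob() returns,
        // so other requests see the new status without waiting for the original
        // long-running request to finish.
        MyModel m = MyModel.findById(id);
        if (m.status == MyModel.PENDING) {
            m.status = MyModel.ACTIVE;
            m.save();
        }
    }
}
Note that calling .get() on now() blocks the calling request until the job, and therefore its transaction, has completed.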
Ouch, as soon as you add more Play servers, you will be in trouble. You may want to play with optimistic locking in your example, or (and I advise against it) pessimistic locking... ick.
HOWEVER, looking at your code, maybe read the article Building on Quicksand. I am not sure you need a synchronized block in that case at all... try to aim for being idempotent.
In your case, if user 1 and user 2 both call that method and it is PENDING, then it goes to ACTIVE (idempotent). Whichever of user 1 or user 2 wins, the result is the same as if you had the synchronized block anyway.
I am sure, however, that you have a more complex scenario not shown here, BUT READ that article Building on Quicksand, as it really changes the traditional way of thinking and is how Google, Amazon, and other very large-scale systems operate.
Another option for distributed transactions across Play servers is ZooKeeper, which the big NoSQL guys use, BUT only as a last resort ;) ;)
later,
Dean