Eclipse Plugin Cancel Completely - eclipse

When you implement a Job in an Eclipse plug-in and override its run() method, you can query the IProgressMonitor parameter and skip tasks if the user has pressed Cancel, like this:
if (!monitor.isCanceled()) {
    monitor.subTask("Doing stuff");
    // do task
} else {
    returnedStatus = Status.CANCEL_STATUS;
}
But that means that at least the currently active task has to finish before the rest is skipped. Is there any way to abort the plug-in's activity completely and execute a finally block when the user presses Cancel, without waiting for the next if (!monitor.isCanceled()) check and without subdividing your whole program into subtasks?

No. Your Job has to be the one to react to cancellation, so you need to either break the job up into tasks for which you can report progress with worked() and check cancellation, or pass around sub-progress monitors and do the same thing.
https://eclipse.org/articles/Article-Progress-Monitors/article.html
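A minimal sketch of the suggested pattern (the org.eclipse.core.runtime Job API calls are standard; the items list and processItem() are hypothetical placeholders for your own work units):

List<String> items = Arrays.asList("unit1", "unit2", "unit3"); // hypothetical work units

Job job = new Job("Doing stuff") {
    @Override
    protected IStatus run(IProgressMonitor monitor) {
        monitor.beginTask("Doing stuff", items.size());
        try {
            for (String item : items) {
                if (monitor.isCanceled()) {
                    return Status.CANCEL_STATUS;  // react to Cancel between work units
                }
                processItem(item);                // hypothetical helper doing one unit of work
                monitor.worked(1);                // report progress per unit
            }
            return Status.OK_STATUS;
        } finally {
            monitor.done();                       // cleanup runs on completion and on cancel
        }
    }
};
job.schedule();

The cancellation check only happens between units, so the smaller the units, the faster the job reacts; the finally block still runs in the cancelled case.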

Related

How to interrupt the awaiting in a `for-await` loop / Swift

I'm wondering how to stop a for-await loop from "awaiting".
Here's the loop. I use it to listen for new transactions with StoreKit 2:
transactionListener = Task(priority: .background) { [self] in
    // wait for transactions and process them as they arrive
    for await verificationResult in Transaction.updates {
        if Task.isCancelled { print("canceled"); break }
        // --- do some funny stuff with the transaction here ---
        await transaction.finish()
    }
    print("done")
}
As you can see, Transaction.updates is awaited and returns a new transaction whenever one is created. When the app finishes, I cancel the loop with transactionListener.cancel() - but the cancellation is ignored, because Transaction.updates is waiting for the next transaction to deliver and there's no direct way in the API to stop it (as there is with, e.g., Task.sleep()).
The issue starts when I run unit tests. The listener from a previous test is still listening while the next test is already running. This produces very unreliable test results and crashes our CI/CD pipeline. I've narrowed it down to the piece of code shown and the issue described.
So, the question: is it possible to interrupt a for-await loop from awaiting? I have something like the Unix/Linux command kill -1 in mind. Any ideas?

(Laravel 5) Monitor and optionally cancel an ALREADY RUNNING job on queue

I need the ability to monitor and cancel an ALREADY RUNNING job on the queue.
There are a lot of answers about deleting QUEUED jobs, but not about an already running one.
This is the situation: I have a "job" which consists of HUNDREDS OF THOUSANDS of rows in a database that need to be queried ONE BY ONE against a web service.
Every row needs to be picked up, queried against the web service, its response stored and its status updated.
I had that already working as a Command (launching from / outputting to console), but now I need to implement queues in order to allow piling up more jobs from more users.
So far I've seen Horizon (which doesn't run on Windows due to missing process control libraries). However, from the demos I've seen around, it lacks (I believe) a couple of things I need:
Dynamically configurable timeout (the whole job may take more than 12 hours, depending on the number of rows to process on the selected job)
Ability to CANCEL an ALREADY RUNNING job.
I also considered the option of generating EACH REQUEST as a new job instead of treating a "job" as the whole collection of rows (this would overcome the timeout issue), but that would give me a Horizon "pending jobs" list of hundreds of thousands of records per job, and that would kill the browser (I know Redis can handle this without flinching). Further, I guess it's not possible to cancel "all jobs belonging to X tag".
I've been thinking about hitting an API route, firing the job and decoupling it from the app, but it seems this requires forking processes.
For the ability to cancel, I would keep a database table with the job_id, and when the user hits an API endpoint to cancel a job, I'd mark it as "halted". On every loop the job would check its status, and if it finds "halted", it would kill itself.
If I've missed any aspect just holler and I'll add it or clarify about it.
So I'm asking for an advice here since I'm new to Laravel: how could I achieve this?
So I finally came up with this (a bit clunky) solution:
In Controller:
public function cancelJob()
{
    $jobs = DB::table('jobs')->get();

    # I could use a specific ID and user owner filter, etc.
    foreach ($jobs as $job) {
        DB::table('jobs')->delete($job->id);
    }

    # This is a file that... well, it's self explaining
    touch(base_path(config('files.halt_process_signal')));

    return "Job cancelled - It will stop soon";
}
In the job class (inside the model::chunk() callback):
# CHECK FOR HALT SIGNAL AND [OPTIONALLY] STOP THE PROCESS
if ($this->service->shouldHaltProcess()) {
    # build stats, do some cleanup, log, etc...
    $this->halted = true;
    $this->service->stopProcess();

    # This FALSE is what makes the chunk() method stop looping
    return false;
}
In the service class:
/**
 * Checks the existence of the 'Halt Process Signal' file
 *
 * @return bool
 */
public function shouldHaltProcess(): bool
{
    return file_exists($this->config['files.halt_process_signal']);
}

/**
 * Stop the batch process
 *
 * @return void
 */
public function stopProcess(): void
{
    logger()->info("=== HALT PROCESS SIGNAL FOUND - STOPPING THE PROCESS ===");
    $this->deleteHaltProcessSignalFile();

    return;
}
It doesn't look very elegant, but it works.
I've searched the whole web and many answers go for Horizon or other tools that don't fit my case.
If anyone has a better way to achieve this, you're welcome to share it.
Laravel queues have three important config options:
1. retry_after
2. timeout
3. tries
See more: https://laravel.com/docs/5.8/queues
Dynamically configurable timeout (the whole job may take more than 12 hours, depending on the number of rows to process on the selected job)
I think you can set timeout + retry_after to about 24 hours.
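A rough sketch of what that could look like (the values and the job class name are assumptions, not recommendations):

// config/queue.php - database connection used for these jobs
'database' => [
    'driver'      => 'database',
    'table'       => 'jobs',
    'queue'       => 'default',
    'retry_after' => 90000, // must be longer than the longest expected job
],

// app/Jobs/ProcessRowsJob.php - hypothetical job class
use Illuminate\Contracts\Queue\ShouldQueue;

class ProcessRowsJob implements ShouldQueue
{
    public $timeout = 86400; // 24 hours
    public $tries   = 1;     // avoid automatically re-running a half-finished batch
}

Keep retry_after larger than timeout, otherwise a worker may pick the same job up again while it is still running.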
Ability to CANCEL an ALREADY RUNNING job.
Delete the job from the jobs table
Kill the process by its process ID on your server
Hope it helps you :)

RxJava2 action not executed if user leaves screen

I do a remove action through RxJava2 that causes a refresh on my local cache like this:
override fun removeExperience(experienceId: String, placeId: String): Completable {
    return from(placesApi.deleteExperience(experienceId, placeId))
        .andThen(from(refreshPlace(placeId))
            .flatMapCompletable { Completable.complete() }
        )
}
so whenever the remove action is done (the Completable completes), a refresh is triggered. The problem is, sometimes this remove action takes long enough for users to just leave the screen, and then the andThen action is never executed because there are no subscribers anymore, and thus the information on the screen is no longer up to date.
Is there a way to enforce this action to take place?
Does this logic keep working when the user opens the same screen again? If so, then you only need to finish the from(placesApi.deleteExperience(experienceId, placeId)) subscription on lifecycle events. The easiest way is to add the whole removeExperience() subscription to a Disposable or CompositeDisposable and then trigger its .dispose() or .clear() on the view's stop or destroy events.
.dispose() - the stored subscription container can no longer be reused.
.clear() - allows re-subscription without creating a new container instance.
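A minimal sketch of that approach with RxJava 2 and RxAndroid, inside a Fragment or Activity (the repository and the view callbacks are hypothetical):

import io.reactivex.android.schedulers.AndroidSchedulers
import io.reactivex.disposables.CompositeDisposable
import io.reactivex.schedulers.Schedulers

private val disposables = CompositeDisposable()

fun onRemoveClicked(experienceId: String, placeId: String) {
    disposables.add(
        repository.removeExperience(experienceId, placeId) // hypothetical repository
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe({ showUpToDateState() }, { showError(it) }) // hypothetical view callbacks
    )
}

override fun onStop() {
    disposables.clear() // keeps the container reusable; dispose() would not
    super.onStop()
}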

Background Process as NSOperation or Thread to monitor and update File

I want to check whether a PDF file has changed, and if it has changed I want to update the corresponding view. I don't know whether it's more suitable to run this task as a background Thread or as an NSOperation. The Apple documentation says: "Examples of tasks that lend themselves well to NSOperation include network requests, image resizing, text processing, or any other repeatable, structured, long-running task that produces associated state or data. But simply wrapping computation into an object doesn't do much without a little oversight."
Also, if I understood the documentation correctly, a Thread, once started, can't be stopped during its execution, while an NSOperation can be paused or cancelled, and operations can rely on dependencies to wait for the completion of another task.
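A minimal sketch of what that cancellable-operation idea could look like (the class name and the work inside are hypothetical; only the Operation API is standard):

import Foundation

final class PDFCheckOperation: Operation {
    override func main() {
        guard !isCancelled else { return }
        // ... compare the PDF's modification date or contents here ...
        if isCancelled { return }                 // cooperative cancellation point
        OperationQueue.main.addOperation {
            // ... update the corresponding view on the main queue ...
        }
    }
}

// Usage: add it to a queue; calling cancel() only takes effect at the checks above.
let operationQueue = OperationQueue()
operationQueue.addOperation(PDFCheckOperation())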
The workflow of this task should be more or less this diagram: [image: task workflow]
I managed to get the handler working after a notification of type .write has been sent. If I monitor, for example, a *.txt file, everything works as expected and I receive only one notification. But I am monitoring a PDF file which is generated from the terminal by pdflatex, and so with .write I receive nearly 15 notifications. If I change to .attrib I get 3 notifications. I need the handler to be called only once, not 15 or 3 times. Do you have any idea how I can do that, or is it not possible with a dispatch source? Maybe there is a way to execute a DispatchWorkItem only once?
I have tried to implement it like this (this is inside a FileMonitor class):
func startMonitoring()
{
    ....
    let fileSystemRepresentation = fileManager.fileSystemRepresentation(withPath: fileStringURL)
    let fileDescriptor = open(fileSystemRepresentation, O_EVTONLY)
    let newfileMonitorSource = DispatchSource.makeFileSystemObjectSource(fileDescriptor: fileDescriptor,
                                                                         eventMask: .attrib,
                                                                         queue: queue)
    newfileMonitorSource.setEventHandler(handler: {
        self.queue.async {
            print("\n received first write event, removing handler...")
            self.newfileMonitorSource.setEventHandler(handler: nil)
            self.test()
        }
    })
    self.fileMonitorSource = newfileMonitorSource
    fileMonitorSource!.resume()
}

func test()
{
    fileMonitorSource?.cancel()
    print(" restart monitoring ")
    startMonitoring()
}
I have tried to reassign the handler in test(), but it's not working (if I regenerate the PDF file, what is inside the new handler is not executed), and to me doing it this way seems like boilerplate code. I have also tried the following things:
suspend the DispatchSource in setEventHandler of startMonitoring() (passing nil), but then when I resume it I get the remaining .write events;
cancel the DispatchSource object and call startMonitoring() again, as you can see in the code above, but this way I create and destroy the DispatchSource object every time I receive an event, which I don't like, because in my case the cancel() function should only be called when the user decides to disable the feature I am implementing.
I will try to describe more clearly how the workflow of the app should be, so you can get a better idea of what I am doing:
When the app starts, a function sets the default values of some checkboxes in the preferences window. The user can modify these checkboxes. So when the user opens a PDF file, the idea is to launch the following task in a background thread:
I create a new queue, call it A, and asynchronously launch an infinite while loop where I check the value of the UserDefaults checkbox (that I use to decide whether to reload and update the PDF file), and two things can happen:
if the user set the value to off and the PDF document has been loaded, there are two situations:
if the file is not currently being monitored (when the app starts): keep checking the checkbox value
if the file is currently being monitored: stop the monitoring
if the user set the value to on and the PDF document has been loaded, in this background thread (the same queue A) I will create a Monitor class (which could be a subclass of NSThread or a class that uses DispatchSourceFileSystemObject like above), then I will call startMonitoring(), which will check the date or .write events and call the handler when there is a change. Basically this handler should call back into the main thread (the main queue), check whether the file can be loaded or is corrupted, and if so update the view. A rough sketch of this flow is given after the notes below.
Note: the infinite while loop (which should be running in the background), checking the UserDefaults value related to the feature I am implementing, is launched when the user opens the PDF file.
Because of the problem above (multiple handler calls), I should call the cancel() function when the user sets the checkbox to off, and not create/destroy the DispatchSource object every time I receive a .write event.
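A rough sketch of that flow, only to make the description concrete (FileMonitor is the class from the snippet above; the preference key, the FileMonitor initializer and stopMonitoring() are assumptions, and the polling loop mirrors the described design rather than recommending it):

final class PDFReloadCoordinator {
    private let queueA = DispatchQueue(label: "pdf.monitor.queue", qos: .background)
    private var monitor: FileMonitor?   // the class sketched above

    func pdfDidOpen() {
        queueA.async {
            while true {                // the described "infinite while"
                let enabled = UserDefaults.standard.bool(forKey: "reloadOnChange") // assumed key
                if enabled && self.monitor == nil {
                    self.monitor = FileMonitor()      // assumed initializer
                    self.monitor?.startMonitoring()
                } else if !enabled, let current = self.monitor {
                    current.stopMonitoring()          // assumed wrapper around cancel()
                    self.monitor = nil
                }
                Thread.sleep(forTimeInterval: 1)      // avoid a busy loop
            }
        }
    }
}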

Display progress when running long operation?

In my ASP.NET MVC 3 project, I've got an action which runs for a certain amount of time.
It would be nice if it could send partial responses back to the view.
The goal is to show the user some progress information.
Does anybody have a clue how to make that work?
I tried writing directly to the response, but it's not sent to the client in parts; it all arrives in one block:
[HttpPost]
public string DoTimeConsumingThings(int someId)
{
    for (int i = 0; i < 10; i++)
    {
        this.Response.Write(i.ToString());
        this.Response.Flush();
        Thread.Sleep(500); // Simulate time-consuming action
    }
    return "Done";
}
In the view:
@Ajax.ActionLink("TestLink", "Create", new AjaxOptions()
    { HttpMethod = "POST", UpdateTargetId = "ProgressTarget" })<br />
<div id="ProgressTarget"></div>
Can anybody help me produce progressive action results?
Thanks!!
Here's how you could implement this: start by defining a class which will hold the state of the long-running operation - you will need properties such as the id, progress, result, ... Then you will need two controller actions: one which starts the task and another one which returns its progress. The Start action spawns a new thread to execute the long-running operation and returns immediately. Once a task is started, you could store the state of this operation in some common storage, such as the Application, keyed by the task id.
The second controller action would be passed the task id and would query the Application to fetch the progress of the given task. Meanwhile, the background thread executes and updates the progress of the task in the Application every time it makes progress.
The last part is the client: you could poll the progress controller action at regular intervals using AJAX and update the progress display.
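A rough sketch of those two actions (controller and key names are hypothetical; locking, error handling and cleanup are omitted):

using System;
using System.Threading;
using System.Web.Mvc;

public class LongTaskController : Controller
{
    [HttpPost]
    public ActionResult Start()
    {
        var taskId = Guid.NewGuid().ToString();
        var app = HttpContext.Application;        // shared application state
        app["progress-" + taskId] = 0;

        new Thread(() =>
        {
            for (var i = 1; i <= 10; i++)
            {
                Thread.Sleep(500);                // simulate the long-running work
                app["progress-" + taskId] = i * 10;
            }
        }).Start();

        return Json(new { taskId });
    }

    public ActionResult Progress(string taskId)
    {
        var progress = HttpContext.Application["progress-" + taskId] ?? 0;
        return Json(new { progress }, JsonRequestBehavior.AllowGet);
    }
}

The client would call Start once, then poll Progress with setInterval and write the returned value into the ProgressTarget div.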