MonoDevelop: macros or running commands sequentially

I'm trying to create an add-in for MonoDevelop that will run commands triggered by external tools (e.g. updating the source, then building and running the project when a message arrives over Jabber). Since I could not find macros, I use "commands" instead, calling them through IdeApp.CommandService.DispatchCommand(). For a single action this works great, but when I try to run several commands sequentially, they execute simultaneously.
So, how can I implement a command queue where each command waits for the previous one to complete?

DispatchCommand is synchronous; however, some of the commands it runs may start asynchronous operations, and those commands have no way to return a handle to those operations.
For those particular commands, I'd recommend that you don't dispatch them as commands but instead directly call the high-level APIs to perform those operations. For example, IdeApp.ProjectOperations.Build returns an IAsyncOperation handle that you can block on using its WaitForCompleted method. You can use IdeApp.Workspace to open projects and get handles to opened projects, set the active configuration, etc.
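As a minimal sketch of that approach (the UpdateBuildAndRun helper is illustrative, and the exact namespaces and overloads depend on the MonoDevelop version you target):
using MonoDevelop.Core;
using MonoDevelop.Ide;
using MonoDevelop.Projects;

static class SequentialCommands
{
    // Intended to run on a background thread; blocking the GUI thread on
    // WaitForCompleted would freeze the IDE.
    public static void UpdateBuildAndRun (string solutionPath)
    {
        // Open the solution and wait for the load to finish.
        IAsyncOperation open = IdeApp.Workspace.OpenWorkspaceItem (solutionPath);
        open.WaitForCompleted ();

        Solution solution = IdeApp.ProjectOperations.CurrentSelectedSolution;

        // Build returns a handle we can block on, unlike dispatching the Build command.
        IAsyncOperation build = IdeApp.ProjectOperations.Build (solution);
        build.WaitForCompleted ();

        if (build.Success)
            IdeApp.ProjectOperations.Execute (solution);
    }
}
Because each WaitForCompleted blocks until the corresponding operation finishes, the steps run strictly one after another, which is the sequencing DispatchCommand alone cannot give you.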

Related

Perl script running a periodic (main) task and providing a REST interface

I am working on a Perl script which does some periodic processing based on file-system contents.
The overall structure is like this:
# ... initialization...
while (1) {
    # ... scan filesystem, perform actions depending on changes detected ...
    sleep 5;
}
I would like to add the ability to feed some data into this process by exposing an interface over HTTP. For example, I would like an endpoint that skips the sleep, and also some way to submit data that gets processed in the next iteration. Additionally, I would like to be able to query some of the program's status through HTTP (i.e. a simple fork() to run the webserver part in a separate process is presumably insufficient, since the two processes would not share state?).
So far I have used the Dancer2 framework once, but its start; call blocks and thus does not allow any other tasks (like my loop) to run. Alternatively, I could of course move the code that is currently inside the loop into an endpoint exposed through Dancer2, but then I would need to call that endpoint periodically (through an external program?), which seems like quite an obscure indirection compared to just having the webserver part running in the background.
Is it possible to unobtrusively (i.e. without blocking the program) add a REST-server capability to a Perl script? If yes: Which modules would be used for the purpose? If no: Should I really implement an external process to periodically invoke a certain endpoint or pursue a different solution altogether?
(I tried to add a dancer2 tag, but could not do so due to insufficient reputation. Do not be misled by this: I have so far only tried Dancer2, not Dancer (v1).)
You could try launching your processing loop in a background thread before you call start;.
See man perlthrtut.
You probably want use threads::shared; to declare some variables shared between the REST part and the background thread, or use dedicated queues/event mechanisms.
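A minimal sketch of that layout, assuming Perl ithreads and Dancer2 (the /skip route and the shared $skip_sleep flag are only illustrations, and threaded Dancer2 setups can be fragile, so treat this as a starting point):
use strict;
use warnings;
use threads;
use threads::shared;
use Dancer2;

my $skip_sleep :shared = 0;

# Background worker: the original periodic loop.
my $worker = threads->create(sub {
    while (1) {
        # ... scan filesystem, perform actions depending on changes detected ...
        sleep 5 unless $skip_sleep;
        $skip_sleep = 0;
    }
});
$worker->detach;

# REST endpoint that pokes the loop via the shared flag.
get '/skip' => sub {
    $skip_sleep = 1;
    return "next iteration will not sleep\n";
};

start;   # blocks, but the worker thread keeps running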

Recursive Workflow in Powershell

I'm trying to automate a lengthy process that can be broken down into several steps (say, Steps 1-5).
I have written a script that separates these into functions and calls them sequentially.
However, we now have the additional requirement of making the script restartable. That is, if it fails in any one of the steps, rerunning the script would cause it to skip all completed steps and retry from the failed one.
Is this at all possible without referencing an external log file?
I've tried using workflows but it seems like recursion isn't supported.
Any ideas?
Here are some options aside from using a log file.
Use the registry
You can set a registry value to a number recording which step you stopped on. This removes the need for a log file, but is somewhat similar in that it still relies on 'external' storage (see the sketch after this list).
Check the task status on each run
Depending on the tasks, you could have the script 'test' each step on startup, for example checking whether step 3 has already been completed, then step 4, 5, etc., until it encounters one it still needs to run, and continue from there. This may be impossible for some tasks, or require a lot of overhead code for not much payoff.
Allow the user to continue from within the script.
This is probably the best way of doing it (aside from just using a log file): run the script in blocks, and when an error is encountered, prompt the user to fix the issue before pressing 'enter' to re-run the previous script block. This also makes it easy to report exactly what failed.
The main thing here is that once a script 'quits', it needs an external source of information to know what happened on its last run, or some other way to handle that state.
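As a rough sketch of the registry option (the key path and the Step-* functions are placeholders, not from the original question):
$key = 'HKCU:\Software\MyCompany\LongProcess'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }

# Read the checkpoint; an absent value means nothing has completed yet.
$lastDone = (Get-ItemProperty -Path $key -Name LastCompletedStep -ErrorAction SilentlyContinue).LastCompletedStep
if (-not $lastDone) { $lastDone = 0 }

# Each step should throw on failure so the loop stops before recording completion.
$steps = @( { Step-One }, { Step-Two }, { Step-Three }, { Step-Four }, { Step-Five } )

for ($i = $lastDone; $i -lt $steps.Count; $i++) {
    & $steps[$i]
    Set-ItemProperty -Path $key -Name LastCompletedStep -Value ($i + 1)
}

# Full run succeeded; clear the checkpoint so the next run starts from step 1.
Remove-ItemProperty -Path $key -Name LastCompletedStep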

protractor: what is the relationship between the control flow and javascript event loop?

I'm having a difficult time trying to understand how the control flow in Protractor works in relation to how the JS event loop works. Here is what I know so far:
Protractor control flow stores commands that return promises in a queue. The first command will be at the front of the queue and the last command will be at the back. No command will be executed until the command in front of it has its promise resolved.
The JS event loop stores asynchronous tasks (callbacks, to be specific). Callbacks are not executed until all functions on the call stack have completed, i.e. the stack is empty; before running each callback, there is a check that the stack is empty.
So let's take this code as an example. It clicks a search button, which makes an API request; after the data is returned, it checks whether the field that stores the returned data exists.
elem('#searchButton').click(); // will trigger an API call to retrieve data
browser.wait(ExpectedConditions.presenceOf(elem('#resultDataField')), 3000);
expect(elem('#resultDataField').isPresent()).toBeTruthy();
So with this code, I'm able to get it to work. But I don't know how it does it. How is the event loop applied in this scenario?
The core of the ControlFlow implementation is in runEventLoop_ (in Selenium's promise.js implementation).
As I understand it, the ControlFlow registers a call to runEventLoop_ with the JS event loop (e.g., with a 0-second timeout or somesuch). The call to runEventLoop_ can be thought of as a single iteration of a normal event loop. It registers code to actually run a scheduled task (i.e., actually do the work you queued up during your it). Once that task completes or fails (e.g., by hooking its async promise callbacks) the next iteration of runEventLoop_ is scheduled (see the calls to scheduleEventLoop in runEventLoop_).
There is some complexity when a callback ends up registering new promises: those need to be "inserted" before the old next event, which is accomplished by creating a "nested" control flow. Mostly you should never have to know this.
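To make the ordering concrete, here is a deliberately simplified model of a control flow as nothing more than a promise chain. It is not Protractor's actual implementation, but it shows why each queued command only starts once the previous command's promise has resolved:
// Simplified model: the "flow" is just a promise chain.
function ControlFlow() {
  this.tail = Promise.resolve();
}

ControlFlow.prototype.execute = function (taskFn) {
  // Chain the new task onto the tail; the result becomes the new tail,
  // so later tasks wait for this one to finish.
  this.tail = this.tail.then(taskFn);
  return this.tail;
};

// Demo: three "commands" that each take 100 ms; they finish strictly in order,
// even though every execute() call returns immediately.
const flow = new ControlFlow();
const delay = (ms, label) =>
  new Promise(resolve => setTimeout(() => { console.log(label); resolve(); }, ms));

flow.execute(() => delay(100, 'click search button'));
flow.execute(() => delay(100, 'wait for result field'));
flow.execute(() => delay(100, 'expect isPresent'));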

Is it possible to force ipengines to completely reset all local variables and imports?

My workflow is: start ipcontroller/ipengines, then run 'python test_script.py' several times with different parameters. This script includes a map_async call. The ipengines don't recognize changes to the code between calls to the script, and static class variables are not reset to their defaults. It seems like a magic %reset call would do the trick, but attempting to execute this command on the ipengines does not seem to do anything.
My solution to this was to use the ipengine to start a new subprocess which completes the desired operations. This subprocess has its own memory. Not ideal, but provides the desired functionality.
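A rough sketch of that workaround (the wrapper and worker_script.py names are illustrative, not from the original setup): instead of mapping the real function, map a thin wrapper that shells out to a fresh Python process, so every run re-imports the code and class defaults from scratch.
import subprocess
import sys

def run_in_fresh_process(param):
    # The child process re-imports everything, so code changes and static
    # class attributes are picked up on every call, at the cost of startup time.
    result = subprocess.run(
        [sys.executable, "worker_script.py", str(param)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# On the client side, map the wrapper instead of the real function, e.g.:
# async_result = view.map_async(run_in_fresh_process, params)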

Running perl function in background or separate process

I'm running in circles. I have a webpage that creates a huge file. The file takes forever to create, and the code that builds it lives in a subroutine.
What is the best way for my page to run this subroutine without waiting for the file to be created/processed? Are there any issues with Apache processes, since I'm doing this from a webpage?
The simplest way to perform this task is to use fork() and have the long-running subroutine run in the child process, while the parent returns to Apache. You indicate that you've tried this already, but absent more information on exactly what the code looks like and what is failing, it's hard to help you move forward on this path.
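For reference, a bare-bones sketch of the fork() approach (generate_huge_file stands in for the real subroutine; the SIGCHLD setting keeps the parent from accumulating zombie children):
use strict;
use warnings;

$SIG{CHLD} = 'IGNORE';   # let exited children be reaped automatically

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: do the slow work, then exit without returning to Apache.
    generate_huge_file();
    exit 0;
}

# Parent: respond immediately.
print "Content-Type: text/plain\n\nFile generation started.\n";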
Another option is to run a separate process that is responsible for managing the long-running task. Have the webpage send a unit of work to that long-running process using a local socket (or by creating a file with the necessary input data); your web script can then return immediately while the separate process takes care of completing the long-running task.
This method of decoupling the execution is fairly common and is often called a "task queue" (if there is some mechanism in place for queuing requests as they come in). There are a number of tools out there that will help you design this sort of solution (but for simple cases with filesystem-based communication you may be fine without them).
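A very small sketch of the file-based hand-off (the spool directory and job format are made up for illustration): the web script drops a job file into a directory and returns, and a separately started worker polls that directory.
# --- web script side ---
use strict;
use warnings;
use File::Temp qw(tempfile);

my ($fh, $jobfile) = tempfile(DIR => '/var/spool/myapp', SUFFIX => '.job');
print {$fh} "report_id=42\n";    # whatever input the long task needs
close $fh;
print "Content-Type: text/plain\n\nJob queued.\n";

# --- worker side (separate long-running process) ---
# while (1) {
#     for my $job (glob '/var/spool/myapp/*.job') {
#         process_job($job);     # the long-running subroutine
#         unlink $job;
#     }
#     sleep 5;
# }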
I think you want to create a worker grandchild of Apache -- that is:
Apache -> child -> grandchild
where the child dies right after forking the grandchild, and the grandchild closes STDIN, STDOUT, and STDERR. (The grandchild then creates the file.) These are the basic steps of daemonizing a worker process: a parent-less process no longer connected to the webserver.
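A sketch of those steps (again, generate_huge_file is a placeholder; setsid and the explicit waitpid are common additions when detaching a worker like this, not requirements stated in the answer above):
use strict;
use warnings;
use POSIX qw(setsid);

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: start a new session, fork the grandchild, then exit at once.
    setsid();
    my $gpid = fork();
    exit 0 if $gpid;            # child exits; grandchild is now parent-less

    # Grandchild: close inherited handles so Apache isn't kept waiting on us.
    open STDIN,  '<', '/dev/null';
    open STDOUT, '>', '/dev/null';
    open STDERR, '>', '/dev/null';

    generate_huge_file();       # the slow work
    exit 0;
}

waitpid($pid, 0);               # reap the short-lived child so it doesn't linger
print "Content-Type: text/plain\n\nFile generation started.\n";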