How to register a VS Code task which invokes a function

I am developing a new extension which uses tasks. I need to create a task which will call a function, rather than starting a new process or shell.
I can create a new task which can execute a shell command.
let task = new vscode.Task(kind, taskName, taskSource, new vscode.ShellExecution(`echo Hello World`));
I would like to make a task which will call another method. Is there a way to do this?

There happens to be a "proposed API" for this exact purpose; see the "custom execution" section in the March 2019 release notes, which includes a code example:
let execution = new vscode.CustomExecution((terminalRenderer, cancellationToken, args): Thenable<number> => {
    return new Promise<number>(resolve => {
        // This is the custom task callback!
        resolve(0);
    });
});
const taskName = "First custom task";
let task = new vscode.Task2(kind, vscode.TaskScope.Workspace, taskName, taskType,
    execution);
original issue: Allow extension to provide callback functions as tasks (#66818)
initial implementation pull request
relevant section in vscode.proposed.d.ts
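For reference, the API has since been finalized: in current versions of VS Code, CustomExecution is stable and its callback returns a Pseudoterminal instead of receiving a terminal renderer. A minimal sketch against the stable API (the "myExtension" source string is just an example, and kind is the task definition from the question):

import * as vscode from 'vscode';

const writeEmitter = new vscode.EventEmitter<string>();
const closeEmitter = new vscode.EventEmitter<number>();
const execution = new vscode.CustomExecution(
    async (): Promise<vscode.Pseudoterminal> => ({
        onDidWrite: writeEmitter.event,
        onDidClose: closeEmitter.event,
        open: () => {
            // This is the custom task callback: call any function you like here.
            writeEmitter.fire('Hello from a function-backed task\r\n');
            closeEmitter.fire(0); // firing an exit code ends the task
        },
        close: () => { /* clean up if the terminal is closed early */ },
    })
);
const task = new vscode.Task(kind, vscode.TaskScope.Workspace,
    'First custom task', 'myExtension', execution);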

Related

How to pass GitHub action event hook from a javascript to a bash file?

I want to create a GitHub action that is simple and only runs a bash-script file; see my previous question: How to execute a bash-script from a java script
With this JavaScript action, I want to pass values to the bash script from the JSON payload given by GitHub.
Can this be done with something as simple as an exec command?
...
exec.exec(`export FILEPATH=${filepath}`)
...
I wanted to do something like this, but found there was much more code needed than I originally expected. So while this is not simple, it does work, and it will block the action script while the bash script runs:
const core = require('@actions/core');

function run() {
    try {
        // This is just a thin wrapper around bash, and runs a file called "script.sh"
        //
        // TODO: Change this to run your script instead
        //
        const script = require('path').resolve(__dirname, 'script.sh');
        const child = require('child_process').execFile(script);
        child.stdout.on('data', (data) => {
            console.log(data.toString());
        });
        child.on('close', (code) => {
            console.log(`child process exited with code ${code}`);
            process.exit(code);
        });
    }
    catch (error) {
        core.setFailed(error.message);
    }
}

run();
Much of the complication is handling output and error conditions.
You can see my debugger-action repo for an example.
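To get values from the JSON payload into the bash script, one approach (a sketch, not the only way) is to read the event file GitHub exposes via the documented GITHUB_EVENT_PATH environment variable and hand values to the child process through its environment; the FILEPATH name and the payload field below are just examples:

const { execFile } = require('child_process');
const path = require('path');

// GitHub writes the full webhook payload to a JSON file inside the runner
// and points GITHUB_EVENT_PATH at it.
const event = require(process.env.GITHUB_EVENT_PATH);

// Example only: pick whatever field your workflow actually needs.
const filepath = event.repository.full_name;

const script = path.resolve(__dirname, 'script.sh');
const child = execFile(script, [], {
    // Unlike exec.exec('export ...'), which runs in its own short-lived
    // process, variables set here are visible to script.sh as $FILEPATH.
    env: { ...process.env, FILEPATH: filepath },
});
child.stdout.on('data', (data) => process.stdout.write(data.toString()));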

When to 'inline' tasks and when to extract a separate task

I am trying to figure out what the criteria should be for deciding whether to 'inline' some work as a set of calls directly in, say, a Does clause (using aliases), or to have a set of separate tasks with proper dependencies. It seems it can be done either way.
For example:
var target = Argument("target", "build");

Task("build")
    .Does(() =>
{
    NuGetRestore("./source/solution.sln");
    DotNetBuild("./source/solution.sln", c => c.Configuration = "Release");
    CopyFiles("./**/*.dll", "./output/");
});

Task("pack")
    .IsDependentOn("build")
    .Does(() =>
{
    NuGetPack("./solution.nuspec");
});

RunTarget(target);
I could 'inline' all of this right into the 'pack' task, or I could have a separate task for each of the NuGet restore, build, and copy-files actions.
Unfortunately, the main answer to this is: it depends. It depends on your own preferences and how you want to work.
Personally, I break Tasks into a concrete piece of functionality, or unit of work. So, in the above example, I would have a Task for:
NuGetRestore
DotNetBuild
CopyFiles
NuGetPack
The thought process here is that, depending on what I want to do, I might want to run only one of those tasks, without everything else running as well. Breaking the Tasks into individual ones gives me the option to piece them together as required.
If you put all the aliases into a single Task, you no longer have the option of doing that.
Best practice is to have one task per step in your build process; an example flow could be:
Clean
Restore
Build
Test
Pack
Publish
Then it will be much clearer what takes time and what caused any failure.
Cake will abort on any failure, so the flow will be the same, but this gives you more granular control and insight.
There's a simple example solution at github.com/cake-build/example
Converting your script according to that example would look something like this:
var target = Argument("target", "Pack");
var configuration = Argument("configuration", "Release");

FilePath solution = File("./source/solution.sln");

Task("Clean")
    .Does(() =>
{
    CleanDirectories(new [] {
        "./source/**/bin/" + configuration,
        "./source/**/obj/" + configuration
    });
});

Task("Restore")
    .IsDependentOn("Clean")
    .Does(() =>
{
    NuGetRestore(solution);
});

Task("Build")
    .IsDependentOn("Restore")
    .Does(() =>
{
    if(IsRunningOnWindows())
    {
        // Use MSBuild
        MSBuild(solution, settings =>
            settings.SetConfiguration(configuration));
    }
    else
    {
        // Use XBuild
        XBuild(solution, settings =>
            settings.SetConfiguration(configuration));
    }
});

Task("Pack")
    .IsDependentOn("Build")
    .Does(() =>
{
    NuGetPack("./solution.nuspec", new NuGetPackSettings {});
});

RunTarget(target);
This will give you a nice step-by-step summary report like this:
Task                      Duration
--------------------------------------------------
Clean                     00:00:00.3885631
Restore                   00:00:00.3742046
Build                     00:00:00.3837149
Pack                      00:00:00.3851542
--------------------------------------------------
Total:                    00:00:01.5316368
If any step fails, it will be much clearer which one.
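A side benefit of the per-step split is that any single task can be run on its own. With the standard Cake bootstrappers from the example repository, that looks something like this (exact bootstrapper flags may vary with your setup):

./build.ps1 -Target Restore
./build.sh --target="Restore"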

Celery send_task and retry on exception

I want to retry (official doc) a task when it raises an exception. Celery allows this via retry, in the form self.retry(...).
Now, I can't figure out how to use self, since I have a function without any class.
My code is this:
... imports ...

app = Celery('elasticcelery')

@app.task(name='rm_doc')
def rm_doc(schema_id, id):
    es = Elasticsearch(es_ip)
    try:
        res = es.delete(schema_id, 'doc', id)
    except NotFoundError as e:
        <here goes the retry>
and it's called from another service in this way:
app_celery = Celery('celeryelastic')
app_celery.config_from_object('django.conf:settings')
app_celery.send_task('rm_doc', kwargs={"schema_id": schema_id, "id": document_id}, )
Now, I should add the self.retry, but there's no self in my method. How should I proceed?
PS: I tried adding self as a parameter, but this fails since there's no mapping when the task is called the first time from the remote.
I forgot bind=True in the decorator of the method; now I can add self.
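For completeness, a minimal sketch of the fixed task (the max_retries and default_retry_delay values are illustrative, and es_ip is assumed to be defined as in the question):

from celery import Celery
from elasticsearch import Elasticsearch
from elasticsearch.exceptions import NotFoundError

app = Celery('elasticcelery')

@app.task(bind=True, name='rm_doc', max_retries=3, default_retry_delay=5)
def rm_doc(self, schema_id, id):
    # bind=True makes Celery pass the task instance as the first argument,
    # so self.retry() is available even though this is a plain function.
    es = Elasticsearch(es_ip)
    try:
        es.delete(schema_id, 'doc', id)
    except NotFoundError as e:
        # self.retry() raises a Retry exception, so the task is re-queued
        # and re-executed after the delay.
        raise self.retry(exc=e)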

Windows Workflow not terminating after Transaction Failure

I am a bit new to Windows Workflow Foundation, so this might be very straightforward, but I am stuck with it. I have a very simple sequential workflow, and there are a couple of code activities inside a TransactionScope activity.
I am running my workflow from a console application with the following code:
Activity workflow = new Process();

var inputArgument = new Dictionary<string, object>();
inputArgument["Argument 1"] = 1234567;
inputArgument["Argument 2"] = 1234567;
inputArgument["Argument 3"] = "GUID";
inputArgument["Argument 4"] = @"\\filepath\";

var syncEvent = new AutoResetEvent(false);

var workflowApp = new WorkflowApplication(workflow, inputArgument);
workflowApp.OnUnhandledException =
    delegate (WorkflowApplicationUnhandledExceptionEventArgs e)
    {
        return UnhandledExceptionAction.Terminate;
    };
workflowApp.Completed +=
    delegate (WorkflowApplicationCompletedEventArgs e)
    {
        syncEvent.Set();
    };

workflowApp.Run();
syncEvent.WaitOne();
If I don't add the TransactionScope activity, my workflow runs fine; in case of an exception the workflow instance terminates and my console application closes as well.
However, when I add the TransactionScope activity and any activity fails inside it, my workflow instance keeps running, and so does my console app. Can anyone guide me on how to terminate the instance?
I am not handling any exception within my workflow, and I want to keep it that way so that I can log the exception details.
If you go to the properties of the TransactionScope in the workflow, there is a property called AbortInstanceOnTransactionFailure that is set to true by default. Set it to false. It should then behave as you're expecting.
When this property is enabled, a transaction failure causes the workflow instance to abort but not terminate.
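If you build the activity tree in code rather than in the designer, the same fix looks roughly like this (a sketch; the property lives on System.Activities.Statements.TransactionScope):

using System.Activities.Statements;

// With the default (true), a failed transaction aborts the instance, so
// Completed never fires and the console app waits forever. With false,
// the exception propagates to OnUnhandledException, which returns
// UnhandledExceptionAction.Terminate and lets the app shut down.
var scope = new TransactionScope
{
    AbortInstanceOnTransactionFailure = false,
    Body = new Sequence
    {
        Activities =
        {
            // ... the code activities from the workflow go here ...
        }
    }
};

As a belt-and-braces measure (an assumption worth testing, not something from the original answer), you can also hook workflowApp.Aborted and set the sync event there, so the console app never hangs on an aborted instance.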

How to run lengthy tasks from an ASP.NET page?

I've got an ASP.NET page with a simple form. The user fills out the form with some details, uploads a document, and some processing of the file then needs to happen on the server side.
My question is: what's the best approach to handling the server-side processing of the files? The processing involves calling an exe. Should I use separate threads for this?
Ideally I want the user to submit the form without the web page just hanging there while the processing takes place.
I've tried this code but my task never runs on the server:
Action<object> action = (object obj) =>
{
    // Create a .xdu file for this job
    string xduFile = launcher.CreateSingleJobBatchFile(LanguagePair, SourceFileLocation);

    // Launch the job
    launcher.ProcessJob(xduFile);
};

Task job = new Task(action, "test");
job.Start();
Any suggestions are appreciated.
You could invoke the processing functionality asynchronously in a classic fire and forget fashion:
In .NET 4.0 you should do this using the new Task Parallel Library:
Task.Factory.StartNew(() =>
{
    // Do work
});
If you need to pass an argument to the action delegate you could do it like this:
Action<object> task = args =>
{
    // Do work with args
};
Task.Factory.StartNew(task, "SomeArgument");
In .NET 3.5 and earlier you would instead do it this way:
ThreadPool.QueueUserWorkItem(args =>
{
    // Do work
});
Related resources:
Task Class
ThreadPool.QueueUserWorkItem Method
Use:
ThreadPool.QueueUserWorkItem(o => MyFunc(arg0, arg1, ...));
where MyFunc() does the server-side processing in the background after the user submits the page.
I have a site that does some potentially long running stuff that needs to be responsive and update a timer for the user.
My solution was to build a state machine into the page with a hidden value and some session values.
I have these on my aspx side:
<asp:Timer ID="Timer1" runat="server" Interval="1600" />
<asp:HiddenField runat="server" ID="hdnASynchStatus" Value="" />
And my code looks something like:
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    PostbackStateEngineStep()
    UpdateElapsedTime()
End Sub

Private Sub PostbackStateEngineStep()
    If hdnASynchStatus.Value = "" And Not CBool(Session("WaitingForCallback")) Then
        Dim i As IAsyncResult = {...run something that spawns in its own thread, and calls ProcessCallBack when it's done...}
        Session.Add("WaitingForCallback", True)
        Session.Add("InvokeTime", DateTime.Now)
        hdnASynchStatus.Value = "WaitingForCallback"
    ElseIf CBool(Session("WaitingForCallback")) Then
        If Not CBool(Session("ProcessComplete")) Then
            hdnASynchStatus.Value = "WaitingForCallback"
        Else
            'ALL DONE HERE
            'redirect to the next page now
            Response.Redirect(...)
        End If
    Else
        hdnASynchStatus.Value = "DoProcessing"
    End If
End Sub

Public Sub ProcessCallBack(ByVal ar As IAsyncResult)
    Session.Add("ProcessComplete", True)
End Sub

Private Sub UpdateElapsedTime()
    'update a label with the elapsed time
End Sub