When to 'inline' tasks and when to extract separate ones - cakebuild

I am trying to figure out what the criteria should be for deciding whether to 'inline' some work as a set of calls directly in, say, a Does clause (using aliases), or to have a set of separate tasks with proper dependencies. It seems like it can be done either way.
For example:
var target = Argument("target", "build");

Task("build")
    .Does(() =>
{
    NuGetRestore("./source/solution.sln");
    DotNetBuild("./source/solution.sln", c => c.Configuration = "Release");
    CopyFiles("./**/*.dll", "./output/");
});

Task("pack")
    .IsDependentOn("build")
    .Does(() =>
{
    NuGetPack("./solution.nuspec");
});

RunTarget(target);
I could 'inline' all of this right into the 'pack' task, or I could have a separate task for each of the NuGet restore, DotNetBuild and copy-files actions.

Unfortunately, the main answer to this is: it depends. It depends on your own preferences and how you want to work.
Personally, I break Tasks into a concrete piece of functionality, or unit of work. So, in the above example, I would have a Task for:
NuGetRestore
DotNetBuild
CopyFiles
NuGetPack
The thought process here is that, depending on what I wanted to do, I might want to run only one of those tasks, without executing everything else. Breaking the work into individual Tasks gives me the option to piece them together as required.
If you put all the aliases into a single Task, you no longer have the option of doing that.

Best practice is to have one task per step in your build process; an example flow could be:
Clean
Restore
Build
Test
Pack
Publish
Then it'll be much clearer what takes time and what caused any failure.
Cake will abort on any failure so the flow will be the same, but it'll give you more granular control and insight.
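To make the dependency model concrete: Cake resolves the dependency chain for whatever target you pass, runs each task at most once in order, and stops at the first failure. Here is a tiny sketch of that idea in plain Node -- this is not Cake and not its real implementation, just an illustration of how dependency-ordered tasks compose:

```javascript
// Minimal illustration of dependency-ordered tasks, in the spirit of
// Cake's Task(...).IsDependentOn(...).Does(...) API. All names here are
// made up for the sketch.
const tasks = new Map();

function task(name, deps, action) {
  tasks.set(name, { deps, action });
}

function runTarget(name, done = new Set(), log = []) {
  if (done.has(name)) return log;                     // each task runs at most once
  const { deps, action } = tasks.get(name);
  for (const dep of deps) runTarget(dep, done, log);  // dependencies first
  action();                                           // a throw here aborts the run
  done.add(name);
  log.push(name);
  return log;
}

task('Clean',   [],          () => {});
task('Restore', ['Clean'],   () => {});
task('Build',   ['Restore'], () => {});
task('Pack',    ['Build'],   () => {});

console.log(runTarget('Pack')); // → [ 'Clean', 'Restore', 'Build', 'Pack' ]
```

Running the 'Pack' target pulls in the whole chain, while running 'Build' alone would stop after the first three -- which is exactly the flexibility you lose when everything is inlined into one task.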
There's a simple example solution at github.com/cake-build/example
Converting your script according to that example would look something like this:
var target = Argument("target", "Pack");
var configuration = Argument("configuration", "Release");
FilePath solution = File("./source/solution.sln");

Task("Clean")
    .Does(() =>
{
    CleanDirectories(new [] {
        "./source/**/bin/" + configuration,
        "./source/**/obj/" + configuration
    });
});

Task("Restore")
    .IsDependentOn("Clean")
    .Does(() =>
{
    NuGetRestore(solution);
});

Task("Build")
    .IsDependentOn("Restore")
    .Does(() =>
{
    if (IsRunningOnWindows())
    {
        // Use MSBuild
        MSBuild(solution, settings =>
            settings.SetConfiguration(configuration));
    }
    else
    {
        // Use XBuild
        XBuild(solution, settings =>
            settings.SetConfiguration(configuration));
    }
});

Task("Pack")
    .IsDependentOn("Build")
    .Does(() =>
{
    NuGetPack("./solution.nuspec", new NuGetPackSettings {});
});

RunTarget(target);
This will give you a nice step-by-step summary report like this:
Task                Duration
--------------------------------------------------
Clean               00:00:00.3885631
Restore             00:00:00.3742046
Build               00:00:00.3837149
Pack                00:00:00.3851542
--------------------------------------------------
Total:              00:00:01.5316368
If any step fails, it'll be much clearer which one.

Related

How can I prevent a Gulp task with a dynamic import from being asynchronous?

I want to use gulp-imagemin to minify images. The relevant part of my gulpfile.js looks like this:
const gulp = require('gulp');
// a couple more require('')s

function minifyImages(cb) {
    import('gulp-imagemin')
        .then(module => {
            const imagemin = module.default;
            gulp.src('src/img/**/*')
                .pipe(imagemin())
                .pipe(gulp.dest('img'));
            cb();
        })
        .catch(err => {
            console.log(err);
            cb();
        });
}

function buildCSS(cb) { /* ... */ }

exports.build = gulp.series(buildCSS, minifyImages);
The reason I'm using a dynamic import here is because I think I have to - gulp-imagemin doesn't support the require('') syntax, and when I write import imagemin from 'gulp-imagemin' I get an error saying "Cannot use import statement outside a module".
I would expect the build task to only finish after minifyImages has finished. After all, I'm calling cb() only at the very end, at a point where the promise should be resolved.
However, build seems to finish early, while minifyImages is still running. This is the output I get:
[21:54:47] Finished 'buildCSS' after 6.82 ms
[21:54:47] Starting 'minifyImages'...
[21:54:47] Finished 'minifyImages' after 273 ms
[21:54:47] Finished 'build' after 282 ms
<one minute later>
[21:55:49] gulp-imagemin: Minified 46 images (saved 5.91 MB - 22.8%)
How can I make sure the task doesn't finish early, and all tasks are run in sequence?
Let me know if there's something wrong with my assumptions; I'm somewhat new to gulp and importing.
Streams are always asynchronous, so if the cb() callback is called just after creating the gulp stream, as in your then handler, the stream by that time has obviously not finished yet (in fact, it hasn't even started).
The simplest solution to call a callback when the gulp.dest stream has finished is to use stream.pipeline, i.e.:
function minifyImages(cb) {
    const { pipeline } = require('stream');
    return import('gulp-imagemin')
        .then(module => {
            const imagemin = module.default;
            pipeline(
                gulp.src('src/img/**/*'),
                imagemin(),
                gulp.dest('img'),
                cb
            );
        })
        .catch(cb);
}
Or similarly, with an async function:
async function minifyImages(cb) {
    const { pipeline } = require('stream');
    const { default: imagemin } = await import('gulp-imagemin');
    return pipeline(
        gulp.src('src/img/**/*'),
        imagemin(),
        gulp.dest('img'),
        cb
    );
}
Another approach I have seen is to split the task in two sequential sub-tasks: the first sub-task imports the plugin module and stores it in a variable, and the second sub-task uses the plugin already loaded by the previous sub-task to create and return the gulp stream in the usual way.
Then the two sub-tasks can be combined with gulp.series.

Vertx CompositeFuture

I am working on a solution where I am using vertx 3.8.4 and vertx-mysql-client 3.9.0 for asynchronous database calls.
Here is the scenario that I have been trying to resolve, in a proper reactive manner.
I have some master-table records which are in an inactive state.
I run a query and get the list of records from the database.
I did this like so:
Future<List<Master>> locationMasters = getInactiveMasterTableRecords();
locationMasters.onSuccess(locationMasterList -> {
    if (locationMasterList.size() > 0) {
        uploadTargetingDataForAllInactiveLocations(vertx, amazonS3Utility,
            locationMasterList);
    }
});
Now in the uploadTargetingDataForAllInactiveLocations method, I have a list of items.
What I have to do is iterate over this list and, for each item, download a file from AWS, parse the file and insert that data into the DB.
I understand the way to do it is using CompositeFuture.
Can someone from the Vert.x dev community help me with this, or point me to some documentation?
I did not find good content on this by googling.
I'm answering this because I was searching for something similar and spent some time before finding an answer; hopefully this will be useful to someone else in the future.
I believe you want to use CompositeFuture in Vert.x only when you need to coordinate multiple actions: either you want an action to execute only when all of the actions your composite future is built upon succeed, or when at least one of them succeeds.
In the first case use CompositeFuture.all(List<Future> futures); in the second case use CompositeFuture.any(List<Future> futures).
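For readers more at home in JavaScript, the semantics map closely onto Promise.all versus Promise.any. This is an analogy only, not Vert.x code; the values are made up:

```javascript
// CompositeFuture.all ~ Promise.all: succeeds only if every future succeeds.
// CompositeFuture.any ~ Promise.any: succeeds as soon as one future succeeds.
const ok = v => Promise.resolve(v);
const fail = e => Promise.reject(new Error(e));

const allDemo = Promise.all([ok(1), ok(2)])          // fulfills with [1, 2]
  .then(r => `all succeeded: ${r}`);

const anyDemo = Promise.any([fail('boom'), ok(42)])  // fulfills with 42
  .then(r => `one succeeded: ${r}`);

Promise.all([allDemo, anyDemo]).then(msgs => msgs.forEach(m => console.log(m)));
```

As with CompositeFuture.all, a single rejection in Promise.all fails the whole combination, which is why the code below completes one promise per item and only runs doAction() once every one of them has completed.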
As per your question, below is sample code where, given a list of items, we run an asynchronous operation for each item (namely downloadAndProcessFile(), which returns a Future), and we want to execute an action doAction() only when all the async operations have succeeded:
List<Future> futures = new ArrayList<>();
locationMasterList.forEach(elem -> {
    Promise<Void> promise = Promise.promise();
    futures.add(promise.future());
    Future<Boolean> processStatus = downloadAndProcessFile(); // doesn't need to be Boolean
    processStatus.onComplete(asyncProcessStatus -> {
        if (asyncProcessStatus.succeeded()) {
            // eventually do stuff with the result
            promise.complete();
        } else {
            promise.fail("Error while processing file");
        }
    });
});

CompositeFuture.all(futures).onComplete(compositeAsync -> {
    if (compositeAsync.succeeded()) {
        doAction(); // <-- here do what you want to do when all futures complete
    } else {
        // at least 1 future failed
    }
});
This solution is probably not perfect and can surely be improved, but it is what works for me. Hopefully it will work for someone else too.

How to pass GitHub action event hook from a javascript to a bash file?

I want to create a GitHub action that is simple and only runs a bash-script file; see my previous question: How to execute a bash-script from a JavaScript.
With this JavaScript action, I want to pass values to the bash-script from the JSON payload given by GitHub.
Can this be done with something as simple as an exec command?
...
exec.exec(`export FILEPATH=${filepath}`)
...
I wanted to do something like this too, but found there to be much more code needed than I originally expected. So while this is not simple, it does work, and it will block the action script while the bash script runs:
const core = require('@actions/core');

function run() {
    try {
        // This is just a thin wrapper around bash, and runs a file called "script.sh"
        //
        // TODO: Change this to run your script instead
        //
        const script = require('path').resolve(__dirname, 'script.sh');
        var child = require('child_process').execFile(script);
        child.stdout.on('data', (data) => {
            console.log(data.toString());
        });
        child.on('close', (code) => {
            console.log(`child process exited with code ${code}`);
            process.exit(code);
        });
    }
    catch (error) {
        core.setFailed(error.message);
    }
}

run()
Much of the complication is handling output and error conditions.
You can see my debugger-action repo for an example.

Cypress browser auto reloading in real time is not working

As the title says, I have to manually rerun my tests in the Cypress Test Runner to see the results every time I change my code. But as per the instructions, it should auto-reload and show the results in real time.
OS: Windows 10 Chrome: Version 78.0.3904.70
Here are two solutions, based on whether you have a build step, or not.
If you don't have a build step, then it's rather easy:
In your cypress/plugins/index.js:
1. Obtain file handles of the spec files that are currently running, so that you can emit a rerun event on them.
2. Set up a chokidar (or similar) watcher, listen for changes to your files, and rerun the spec.
// (1) setup to obtain file handles
// ------------------------------------------------------
let openFiles = [];

module.exports = (on) => {
    on('file:preprocessor', file => {
        if (
            /\.spec\.js/.test(file.filePath) &&
            !openFiles.find(f => f.filePath === file.filePath)
        ) {
            openFiles.push(file);
            file.on('close', () => {
                openFiles = openFiles.filter(f => f.filePath !== file.filePath);
            });
        }
        // tells cypress to not compile the file. If you normally
        // compile your spec or support files, then instead of this line,
        // return whatever you do for compilation
        return file.filePath;
    });
    on('before:browser:launch', () => {
        openFiles = [];
    });
};

// (2) watching and re-running logic
// ------------------------------------------------------
const chokidar = require('chokidar');

chokidar.watch([ /* paths/to/watch */ ])
    .on("change", () => rerunTests());

function rerunTests() {
    // https://github.com/cypress-io/cypress/issues/3614
    const file = openFiles[0];
    if (file) file.emit('rerun');
}
If you have a build step, the workflow is more involved. I won't get into implementation details, but just to give an overview:
Set up an IPC channel when you start your build-step watcher, and upon file save & compilation, emit a did-compile event or similar.
The logic in cypress/plugins/index.js mostly remains the same as in the previous solution, but instead of a chokidar watcher, you'll subscribe to the IPC server's did-compile event and rerun the specs when the event is emitted.
For more info, refer to Preprocessors API, and cypress-io/cypress-watch-preprocessor.

How to register a VS Code task which invokes a function

I am developing a new extension which uses tasks. I need to create a task which will call a function, rather than starting a new process or shell.
I can create a new task which can execute a shell command.
let task = new vscode.Task(kind, taskName, taskSource, new vscode.ShellExecution(`echo Hello World`));
I would like to make a task which will call another method. Is there a way to do this?
There happens to be a "proposed API" for this exact purpose:
"custom execution" section in the March 2019 release notes with a code example:
let execution = new vscode.CustomExecution((terminalRenderer, cancellationToken, args): Thenable<number> => {
    return new Promise<number>(resolve => {
        // This is the custom task callback!
        resolve(0);
    });
});
const taskName = "First custom task";
let task = new vscode.Task2(kind, vscode.TaskScope.Workspace, taskName, taskType,
    execution);
original issue: Allow extension to provide callback functions as tasks (#66818)
initial implementation pull request
relevant section in vscode.proposed.d.ts