As the title says, I have to manually rerun the Cypress Test Runner to see the results every time I change my code. But as per the instructions, it should auto-reload and show the results in real time.
OS: Windows 10; Chrome: Version 78.0.3904.70
Here are two solutions, based on whether you have a build step, or not.
If you don't have a build step, then it's rather easy:
In your cypress/plugins/index.js:
Obtain file handles of the spec files that are currently running, so that you can emit a rerun event on them.
Set up a chokidar (or similar) watcher, listen for your file changes, and rerun the spec.
// (1) setup to obtain file handles
// ------------------------------------------------------
let openFiles = [];

module.exports = (on) => {
  on('file:preprocessor', file => {
    if (
      /\.spec\.js/.test(file.filePath) &&
      !openFiles.find(f => f.filePath === file.filePath)
    ) {
      openFiles.push(file);
      file.on('close', () => {
        openFiles = openFiles.filter(f => f.filePath !== file.filePath);
      });
    }
    // tells cypress to not compile the file. If you normally
    // compile your spec or support files, then instead of this line,
    // return whatever you do for compilation
    return file.filePath;
  });
  on('before:browser:launch', () => {
    openFiles = [];
  });
};
// (2) watching and re-running logic
// ------------------------------------------------------
const chokidar = require('chokidar');

chokidar.watch([ /* paths/to/watch */ ])
  .on('change', () => rerunTests());

function rerunTests() {
  // https://github.com/cypress-io/cypress/issues/3614
  const file = openFiles[0];
  if (file) file.emit('rerun');
}
If you have a build step, the workflow is more involved. I won't get into implementation details, but here is an overview:
Set up an IPC channel when you start your build-step watcher, and upon file save & compilation, emit a did-compile event or similar.
The logic in cypress/plugins/index.js mostly remains the same as in the previous solution, but instead of a chokidar watcher, you'll subscribe to the IPC server's did-compile event and rerun the specs when the event is emitted.
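As a rough sketch of one possible wiring (not the only one), the plugins file could act as a tiny socket server. This assumes the build watcher connects to localhost:7777 (an arbitrary port chosen here) and writes something after every successful compile; any incoming data is treated as "did-compile":

// cypress/plugins/index.js (sketch)
const net = require('net');

net.createServer(socket => {
  socket.on('data', () => rerunTests());
}).listen(7777);

function rerunTests() {
  const file = openFiles[0]; // same openFiles bookkeeping as in the first solution
  if (file) file.emit('rerun');
}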
For more info, refer to the Preprocessors API and cypress-io/cypress-watch-preprocessor.
I want to use gulp-imagemin to minify images. The relevant part of my gulpfile.js looks like this:
const gulp = require('gulp');
// a couple more require('')s

function minifyImages(cb) {
  import('gulp-imagemin')
    .then(module => {
      const imagemin = module.default;
      gulp.src('src/img/**/*')
        .pipe(imagemin())
        .pipe(gulp.dest('img'));
      cb();
    })
    .catch(err => {
      console.log(err);
      cb();
    });
}

function buildCSS(cb) { /* ... */ }

exports.build = gulp.series(buildCSS, minifyImages);
The reason I'm using a dynamic import here is because I think I have to - gulp-imagemin doesn't support the require('') syntax, and when I write import imagemin from 'gulp-imagemin' I get an error saying "Cannot use import statement outside a module".
I would expect the build task to only finish after minifyImages has finished. After all, I'm calling cb() only at the very end, at a point where the promise should be resolved.
However, build seems to finish early, while minifyImages is still running. This is the output I get:
[21:54:47] Finished 'buildCSS' after 6.82 ms
[21:54:47] Starting 'minifyImages'...
[21:54:47] Finished 'minifyImages' after 273 ms
[21:54:47] Finished 'build' after 282 ms
<one minute later>
[21:55:49] gulp-imagemin: Minified 46 images (saved 5.91 MB - 22.8%)
How can I make sure the task doesn't finish early, and all tasks are run in sequence?
Let me know if there's something wrong with my assumptions; I'm somewhat new to gulp and importing.
Streams are always asynchronous, so if the cb() callback is called just after creating the gulp stream, as in your then handler, the stream obviously hasn't finished by that time (in fact, it hasn't even started).
The simplest solution to call a callback when the gulp.dest stream has finished is using stream.pipeline, i.e.:
function minifyImages(cb) {
  const { pipeline } = require('stream');
  return import('gulp-imagemin')
    .then(module => {
      const imagemin = module.default;
      pipeline(
        gulp.src('src/img/**/*'),
        imagemin(),
        gulp.dest('img'),
        cb
      );
    })
    .catch(cb);
}
Or, similarly, with an async function:
async function minifyImages(cb) {
  const { pipeline } = require('stream');
  const { default: imagemin } = await import('gulp-imagemin');
  return pipeline(
    gulp.src('src/img/**/*'),
    imagemin(),
    gulp.dest('img'),
    cb
  );
}
Another approach I have seen is to split the task into two sequential sub-tasks: the first sub-task imports the plugin module and stores it in a variable, and the second sub-task uses the plugin already loaded by the previous sub-task to create and return the gulp stream in the usual way.
Then the two sub-tasks can be combined with gulp.series.
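A minimal sketch of that split, assuming the loaded plugin is kept in a module-level variable (the loadImagemin name is only illustrative):

let imagemin; // populated by the first sub-task

async function loadImagemin() {
  imagemin = (await import('gulp-imagemin')).default;
}

function minifyImages() {
  // returning the stream lets gulp know when this task has finished
  return gulp.src('src/img/**/*')
    .pipe(imagemin())
    .pipe(gulp.dest('img'));
}

exports.build = gulp.series(buildCSS, loadImagemin, minifyImages);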
I coded the next Node/Express/Mongo script:
const { MongoClient } = require("mongodb");
const stream = require("stream");

async function main() {
  // CONNECTING TO LOCALHOST (REPLICA SET)
  const client = new MongoClient("mongodb://localhost:27018");
  try {
    // CONNECTION
    await client.connect();
    // EXECUTING MY WATCHER
    console.log("Watching ...");
    await myWatcher(client, 15000);
  } catch (e) {
    // ERROR MANAGEMENT
    console.log(`Error > ${e}`);
  } finally {
    // CLOSING CLIENT CONNECTION ???
    await client.close(); // <-- ????
  }
}

main().catch(console.error);

// MY WATCHER. LISTENING FOR CHANGES FROM MY DATABASE
async function myWatcher(client, timeInMs, pipeline = []) {
  // TARGET TO WATCH
  const watching = client.db("myDatabase").collection("myCollection").watch(pipeline);
  // WATCHING CHANGES ON TARGET
  watching.on("change", (next) => {
    console.log(JSON.stringify(next));
    console.log(`Doing my things...`);
  });
  // CLOSING THE WATCHER ???
  closeChangeStream(timeInMs, watching); // <-- ????
}

// CHANGE STREAM CLOSER
function closeChangeStream(timeInMs = 60000, watching) {
  return new Promise((resolve) => {
    setTimeout(() => {
      console.log("Closing the change stream");
      watching.close();
      resolve();
    }, timeInMs);
  });
}
So, the goal is to keep the myWatcher function always active, so it watches any database changes and, for example, sends a user notification when an update is detected. The closeChangeStream function closes the change stream X seconds after any database change. So, to keep myWatcher always active, do you recommend not using the closeChangeStream function?
Another thing: with this goal in mind, if I keep the await client.close();, my code emits an error: Topology is closed, whereas when I omit await client.close(), my code works perfectly. Do you recommend not using await client.close() so that myWatcher stays active?
I'm a newbie to these topics!
Thanks for the advice and the help!
MongoDB change streams are implemented in a pub/sub paradigm.
Send your application to a friend in Sudan. Have both you and your friend run the application (the one with the change stream implemented). If you open up mongosh and run db.getCollection('myCollection').updateOne({ _id: ObjectId("6220ee09197c13d24a7997b7") }, { $set: { FirstName: "Bob" } });, both you and your friend will get the console.log from the change stream.
This assumes you're not running localhost, but you can simulate it with two copies of the application locally.
The issue comes when you go into production and suddenly you have 200 load balancers, 5 developers, etc. running, and your watch fires on a ton of writes from around the globe.
I believe the practice is to functionize it: wrap your watch in a function, fire the function when you're about to do a write, and close the stream after you do your associated writes, as sketched below.
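A minimal sketch of that pattern, assuming a shared client and the same database/collection names as in the question (watchWhileWriting and doWrites are illustrative names):

async function watchWhileWriting(client, doWrites) {
  const collection = client.db("myDatabase").collection("myCollection");
  const changeStream = collection.watch();

  changeStream.on("change", next => {
    console.log(`Change detected: ${JSON.stringify(next)}`);
  });

  try {
    await doWrites(collection); // perform the associated writes
  } finally {
    await changeStream.close(); // close the stream once the writes are done
  }
}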
I want to create a GitHub action that is simple and only runs a bash script file; see my previous
question: How to execute a bash script from a JavaScript.
With this JavaScript action, I want to pass values to the bash script from the JSON payload given by GitHub.
Can this be done with something as simple as an exec command?
...
exec.exec(`export FILEPATH=${filepath}`)
...
I wanted to do something like this, but found there to be much more code needed than I originally expected. So while this is not simple, it does work and will block the action script while the bash script runs:
const core = require('@actions/core');

function run() {
  try {
    // This is just a thin wrapper around bash, and runs a file called "script.sh"
    //
    // TODO: Change this to run your script instead
    //
    const script = require('path').resolve(__dirname, 'script.sh');
    const child = require('child_process').execFile(script);
    child.stdout.on('data', (data) => {
      console.log(data.toString());
    });
    child.on('close', (code) => {
      console.log(`child process exited with code ${code}`);
      process.exit(code);
    });
  } catch (error) {
    core.setFailed(error.message);
  }
}

run();
Much of the complication is handling output and error conditions.
You can see my debugger-action repo for an example.
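To get values from the GitHub event payload into the script, as the question asks, one option is to read the payload file exposed via GITHUB_EVENT_PATH and forward fields as environment variables to execFile. A sketch, in which FILEPATH and the repository.full_name field are only illustrative:

const payload = require(process.env.GITHUB_EVENT_PATH); // the webhook event JSON GitHub provides
const script = require('path').resolve(__dirname, 'script.sh');
const child = require('child_process').execFile(script, [], {
  env: { ...process.env, FILEPATH: payload.repository.full_name },
});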
I am working in the microsoft/azuredatastudio GitHub repo, which is largely forked from vscode. I am trying to extend our command line processing to handle the window-reuse parameter such that if we pass a server connection along with -r, we will open the specified connection. Our current command line processing service is loaded by src\vs\workbench\electron-browser\workbench.ts in Workbench.initServices.
Is there any platform-provided service that is visible to both electron-main and workbench\electron-browser that I could modify or leverage to be informed of the app being reused with new command line arguments?
I've found that the LaunchService defined in src\vs\code\electron-main\launch.ts appears to be responsible for capturing the arguments and opening or reusing the window, but it's not clear how I would marshal a notification from the LaunchService over to our services that are loaded by workbench.
2/12/2019 update:
It looks like I need to add an equivalent of this function in src\vs\code\electron-main\windows.ts
private doOpenFilesInExistingWindow(configuration: IOpenConfiguration, window: ICodeWindow, filesToOpen: IPath[], filesToCreate: IPath[], filesToDiff: IPath[], filesToWait: IPathsToWaitFor): ICodeWindow {
  window.focus(); // make sure window has focus
  window.ready().then(readyWindow => {
    const termProgram = configuration.userEnv ? configuration.userEnv['TERM_PROGRAM'] : void 0;
    readyWindow.send('vscode:openFiles', { filesToOpen, filesToCreate, filesToDiff, filesToWait, termProgram });
  });
  return window;
}
which has a new message like 'ads:openconnection'. Now to find out how to handle the message.
I ended up using the ipcRenderer service and adding an IPC call to the launch service in main.
// {{SQL CARBON EDIT}}
// give the first used window a chance to process the other command line arguments
if (args['reuse-window'] && usedWindows.length > 0 && usedWindows[0]) {
  let window = usedWindows[0];
  usedWindows[0].ready().then(() => window.send('ads:processCommandLine', args));
}
// {{SQL CARBON EDIT}}
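On the workbench side, the message can then be picked up with an ipcRenderer listener along these lines (a sketch; commandLineService.processCommandLine is a stand-in for whatever handler your command line processing service actually exposes):

import { ipcRenderer } from 'electron';

ipcRenderer.on('ads:processCommandLine', (event, args) => {
  // hand the parsed arguments to the command line processing service
  commandLineService.processCommandLine(args);
});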
Say you have a file with:
AddReactImport();
And the plugin:
export default function ({ types: t }) {
  return {
    visitor: {
      CallExpression(p) {
        if (p.node.callee.name === "AddReactImport") {
          // add import if it's not there
        }
      }
    }
  };
}
How do you add import React from 'react'; at the top of the file/tree if it's not there already?
I think more important than the answer is how you find out how to do it. Please tell me, because I'm having a hard time finding info sources on how to develop Babel plugins. My sources right now are: the Plugin Handbook, Babel Types, the AST Spec, this blog post, and the AST Explorer. It feels like using an English-German dictionary to try to speak German.
export default function ({ types: t }) {
  return {
    visitor: {
      Program(path) {
        const identifier = t.identifier('React');
        const importDefaultSpecifier = t.importDefaultSpecifier(identifier);
        const importDeclaration = t.importDeclaration([importDefaultSpecifier], t.stringLiteral('react'));
        path.unshiftContainer('body', importDeclaration);
      }
    }
  };
}
If you want to inject code, just use @babel/template to generate the AST node for it; then inject it as you need to.
Preamble: Babel documentation is not the best
I also agree that, even in 2020, information is sparse. I am getting most of my info by actually working through the babel source code, looking at all the tools (types, traverse, path, code-frame etc...), the helpers they use, existing plugins (e.g. istanbul to learn a bit about basic instrumentation in JS), the webpack babel-loader and more...
For example: unshiftContainer (and actually, babel-traverse in general) has no official documentation, but you can find its source code here (fascinatingly enough, it accepts either a single node or an array of nodes!).
Strategy #1 (updated version)
In this particular case, I would:
Create a @babel/template
prepare that AST once at the start of my plugin
inject it into Program (i.e. the root path) once, only if the particular function call has been found
NOTE: Templates also support variables. Very useful if you want to wrap existing nodes or want to produce slight variations of the same code, depending on context.
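For instance, a builder with an uppercase placeholder can be filled in at call time (a sketch; VALUE is an arbitrary placeholder name, and t is assumed to be @babel/types in scope):

const buildAssign = template(`const result = VALUE;`);
const node = buildAssign({ VALUE: t.numericLiteral(42) });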
Code (using Strategy #1)
import template from "@babel/template";

// template
const buildImport = template(`
  import React from 'react';
`);

// plugin
const plugin = function () {
  const importDeclaration = buildImport();
  let imported = false;
  let root;

  return {
    visitor: {
      Program(path) {
        root = path;
      },
      CallExpression(path) {
        if (!imported && path.node.callee.name === "AddMyImport") {
          // add import if it's not there
          imported = true;
          root.unshiftContainer('body', importDeclaration);
        }
      }
    }
  };
};
Strategy #2 (old version)
An alternative is:
use a utility function to generate an AST from source (parseSource)
prepare that AST once at the start of my plugin
inject it into Program (i.e. the root path) once, only if the particular function call has been found
Code (using Strategy #2)
Same as above but with your own compiler function (not as efficient as @babel/template):
// modules this helper relies on
import { parse } from "@babel/parser";
import { codeFrameColumns } from "@babel/code-frame";
import traverse from "@babel/traverse";

/**
 * Helper: Generate AST from source through `@babel/parser`.
 * Copied from somewhere... I think it was `@babel/traverse`
 * @param {*} source
 */
export function parseSource(source) {
  let ast;
  try {
    source = `${source}`;
    ast = parse(source);
  } catch (err) {
    const loc = err.loc;
    if (loc) {
      err.message +=
        "\n" +
        codeFrameColumns(source, {
          start: {
            line: loc.line,
            column: loc.column + 1,
          },
        });
    }
    throw err;
  }
  const nodes = ast.program.body;
  nodes.forEach(n => traverse.removeProperties(n));
  return nodes;
}
Possible Pitfalls
When a new node is injected/replaced etc., Babel will run all plugins on it again. This is why your first instrumentation plugin is likely to encounter an infinite loop right off the bat: you want to remember and not re-visit previously visited nodes (I'm using a Set for that).
It gets worse when wrapping nodes. Nodes wrapped (e.g. with @babel/template) are actually copies, not the original node. In that case, you want to remember that it is instrumented and skip it in case you come across it again, or, again: infinite loop 💥!
If you don't want to instrument nodes that have been emitted by any plugin (not just yours), that is, if you only want to operate on the original source code, you can skip them by checking whether they have a loc property (injected nodes usually do not have one).
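A minimal sketch of both guards combined (the names are illustrative):

const seen = new Set();

const plugin = function () {
  return {
    visitor: {
      CallExpression(path) {
        // skip nodes without a loc (injected by some plugin) and nodes already visited
        if (!path.node.loc || seen.has(path.node)) return;
        seen.add(path.node);
        // ... actual instrumentation goes here
      }
    }
  };
};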
In your case, you are trying to add an import statement, which won't always work without the right plugins enabled or without sourceType set to module.
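If the template throws on the import syntax, one way around it is to pass parser options through @babel/template's second argument (a sketch):

const buildImport = template(`import React from 'react';`, { sourceType: "module" });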
I believe there's an even better way now: @babel/helper-module-imports
For you, the code would be:
import { addDefault } from "@babel/helper-module-imports";
addDefault(path, 'react', { nameHint: "React" })
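A sketch of how that could sit inside the visitor from the question (note that each call inserts an import, so any deduplication is left to the plugin):

import { addDefault } from "@babel/helper-module-imports";

export default function () {
  return {
    visitor: {
      CallExpression(path) {
        if (path.node.callee.name === "AddReactImport") {
          // inserts `import React from 'react';` and returns the local identifier
          addDefault(path, 'react', { nameHint: "React" });
        }
      }
    }
  };
}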