I'm building a plugin that inserts enterFunction() in front of every existing function call by calling path.insertBefore. So my code is transformed from:
myFunction();
To:
enterFunction();
myFunction();
The problem is that when I insert the node, Babel traverses the inserted node as well. Here's the logging output:
'CallExpression', 'myFunction'
'CallExpression', 'enterFunction'
How can I prevent Babel from entering the enterFunction call expression and its children?
This is the code I'm currently using for my Babel plugin:
function(babel) {
  return {
    visitor: {
      CallExpression: function(path) {
        console.log("CallExpression", path.node.callee.name);
        if (path.node.ignore) {
          return;
        }
        path.node.ignore = true;
        var enterCall = babel.types.callExpression(
          babel.types.identifier("enterFunction"), []
        );
        enterCall.ignore = true;
        path.insertBefore(enterCall);
      }
    }
  };
}
The Babel Handbook has the following section:
If your plugin needs to not run in a certain situation, the simplest thing to do is to write an early return.
BinaryExpression(path) {
  if (path.node.operator !== '**') return;
}
If you are doing a sub-traversal in a top level path, you can use 2 provided API methods:
path.skip() skips traversing the children of the current path. path.stop() stops traversal entirely.
outerPath.traverse({
  Function(innerPath) {
    innerPath.skip(); // if checking the children is irrelevant
  },
  ReferencedIdentifier(innerPath, state) {
    state.iife = true;
    innerPath.stop(); // if you want to save some state and then stop traversal, or deopt
  }
});
In short, use path.skip() to skip traversing the children of the current path.
One application of this method is illustrated in this snippet using Visitors, CallExpression and skip():
export default function (babel) {
  const { types: t } = babel;

  return {
    name: "ast-transform", // not required
    visitor: {
      CallExpression(path) {
        path.replaceWith(t.blockStatement([
          t.expressionStatement(t.yieldExpression(path.node))
        ]));
        path.skip();
      }
    }
  };
}
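Applying the early-return idea from the handbook excerpt to the original plugin gives a minimal sketch like the following. The guard on the callee name is an assumption for illustration: it presumes enterFunction does not already appear in the source being transformed.

module.exports = function (babel) {
  var t = babel.types;
  return {
    visitor: {
      CallExpression: function (path) {
        // Early return for the calls this plugin inserted itself.
        // Assumption: "enterFunction" does not occur in the original source.
        if (path.node.callee.name === "enterFunction") {
          return;
        }
        console.log("CallExpression", path.node.callee.name);
        var enterCall = t.callExpression(t.identifier("enterFunction"), []);
        path.insertBefore(enterCall);
      }
    }
  };
};

With the check placed before the console.log, re-traversal of the inserted call returns immediately and only the original calls are logged.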
Related
I'm currently working on a simple game called Bingo. I've made a spectate option, for which I need to broadcast the game not in real time but with a 10-second delay. How can I do that easily?
The idea of using an observe routine seems good, but there are at least a couple of ways this can be implemented. One way is to delay the subscription itself. Here's a working example:
import { Meteor } from 'meteor/meteor';
import { TheCollection } from '/imports/collections.js';
Meteor.publish('delayed', function (delay) {
  let isStopped = false;
  const handle = TheCollection.find({}).observeChanges({
    added: (id, fields) => {
      Meteor.setTimeout(() => {
        if (!isStopped) {
          this.added(TheCollection._name, id, fields);
        }
      }, delay);
    },
    changed: (id, fields) => {
      Meteor.setTimeout(() => {
        if (!isStopped) {
          this.changed(TheCollection._name, id, fields);
        }
      }, delay);
    },
    removed: (id) => {
      Meteor.setTimeout(() => {
        if (!isStopped) {
          this.removed(TheCollection._name, id);
        }
      }, delay);
    }
  });
  this.onStop(() => {
    isStopped = true;
    handle.stop();
  });
  this.ready();
});
Another way would be to create a local ProxyCollection that is used only for rendering purposes. The data would be copied from TheCollection to ProxyCollection with some delay, using the same "observe" technique as in the subscription case.
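A minimal client-side sketch of that idea (the ProxyCollection name and the 10-second delay are assumptions for illustration, not part of the original code):

import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { TheCollection } from '/imports/collections.js';

// Local, client-only collection used purely for rendering.
const ProxyCollection = new Mongo.Collection(null);
const DELAY = 10000; // 10 seconds

TheCollection.find({}).observeChanges({
  added(id, fields) {
    Meteor.setTimeout(() => ProxyCollection.insert(Object.assign({ _id: id }, fields)), DELAY);
  },
  changed(id, fields) {
    Meteor.setTimeout(() => ProxyCollection.update(id, { $set: fields }), DELAY);
  },
  removed(id) {
    Meteor.setTimeout(() => ProxyCollection.remove(id), DELAY);
  }
});

Templates would then query ProxyCollection instead of TheCollection.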
In both scenarios you will need to handle some edge cases, for example:
Should the data be delayed on the initial load?
Should the update be delayed if a document is removed?
Should the update be delayed for the user who initiated the change?
They can all be solved by adjusting the technique presented above, but I believe they're outside the scope of this question.
EDIT
To prevent delays on the initial data load you can update the above code as follows:
let initializing = true;
const handle = TheCollection.find({}).observeChanges({
  added: (id, fields) => {
    if (initializing) {
      this.added(TheCollection._name, id, fields);
    } else {
      Meteor.setTimeout(() => {
        if (!isStopped) {
          this.added(TheCollection._name, id, fields);
        }
      }, delay);
    }
  },
  // ...
});
// ...
this.ready();
initializing = false;
At first, it may not be obvious why this works, but everything here is executed within a fiber. The observeChanges routine "blocks": it first calls added for each document of the entire initial dataset, and only then does it proceed to the next part of your publish method body.
One thing to be aware of: because of the behavior described above, a subscription may be stopped before the initial data set is processed, and thus before the onStop callback is even defined. In this particular case it shouldn't hurt, but sometimes it can be problematic.
You can use .observe(). It will tell you when added/changed events fire, and you can do whatever you want in those callbacks (see the Meteor documentation for observe).
CollectionName.find().observe({
  added: function (document) {
    // do something here, like delaying the update
  },
  changed: function (document) {
    // do something here, like delaying the update
  },
});
I've come across different types of syntax for Protractor's Page Objects, and I was wondering what their background is and which way is recommended.
This is the official PageObject syntax from Protractor's tutorial. I like it the most, because it's clear and readable:
'use strict';

var AngularHomepage = function() {
  var nameInput = element(by.model('yourName'));
  var greeting = element(by.binding('yourName'));

  this.get = function() {
    browser.get('http://www.angularjs.org');
  };
  this.setName = function(name) {
    nameInput.sendKeys(name);
  };
  this.getGreeting = function() {
    return greeting.getText();
  };
};
module.exports = AngularHomepage;
However, I've also found this kind:
'use strict';

var AngularPage = function () {
  browser.get('http://www.angularjs.org');
};

AngularPage.prototype = Object.create({}, {
  todoText: { get: function () { return element(by.model('todoText')); }},
  addButton: { get: function () { return element(by.css('[value="add"]')); }},
  yourName: { get: function () { return element(by.model('yourName')); }},
  greeting: { get: function () { return element(by.binding('yourName')).getText(); }},
  todoList: { get: function () { return element.all(by.repeater('todo in todos')); }},
  typeName: { value: function (keys) { return this.yourName.sendKeys(keys); }},
  todoAt: { value: function (idx) { return this.todoList.get(idx).getText(); }},
  addTodo: { value: function (todo) {
    this.todoText.sendKeys(todo);
    this.addButton.click();
  }}
});
module.exports = AngularPage;
What are the pros/cons of those two approaches (apart from readability)? Is the second one up-to-date? I've seen that WebdriverIO uses that format.
I've also heard from someone on Gitter that the first approach is inefficient. Can someone explain why?
The Page Object Model (POM) pattern is popular mainly because of:
Less code duplication
Easier long-term maintenance
High readability
Generally, we develop a test framework (POM) for our own convenience, based on the testing scope and needs, by following whichever patterns suit us. There is no rule that says we must strictly follow any particular framework.
NOTE: A framework exists to make our work easy, result-oriented, and effective.
In your case, the first one looks good and simple, and it does not lead to confusion or conflicts during the maintenance phase.
Example: in the first case, element locator declarations sit at the top of each page object, so it is easy to update a locator if it changes in the future.
In the second case, locators are declared at block level (scattered across the page object), so identifying and changing them later is more time-consuming.
So choose whichever one you feel comfortable with, based on the points above.
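Whichever style you choose, the spec that consumes the page object reads the same. A minimal usage sketch against the first snippet's API (the require path and the exact greeting text are assumptions based on the AngularJS homepage demo):

var AngularHomepage = require('./AngularHomepage');

describe('angularjs homepage', function () {
  it('greets the named user', function () {
    var page = new AngularHomepage();
    page.get();
    page.setName('Julie');
    expect(page.getGreeting()).toEqual('Hello Julie!');
  });
});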
I prefer to use ES6 class syntax (http://es6-features.org/#ClassDefinition). Here is a simple example of how I work with page objects using ES6 classes, along with some helpful tricks.
var Page = require('../Page');
var Fragment = require('../Fragment');

class LoginPage extends Page {
  constructor() {
    super('/login');
    this.emailField = $('input.email');
    this.passwordField = $('input.password');
    this.submitButton = $('button.login');
    this.restorePasswordButton = $('button.restore');
  }

  login(username, password) {
    this.emailField.sendKeys(username);
    this.passwordField.sendKeys(password);
    this.submitButton.click();
  }

  restorePassword(email) {
    this.restorePasswordButton.click();
    new RestorePasswordModalWindow().submitEmail(email);
  }
}

class RestorePasswordModalWindow extends Fragment {
  constructor() {
    // Passing the element that will be used as this.fragment.
    super($('div.modal'));
  }

  submitEmail(email) {
    // This is how you can use methods from the super class; just an example - it is not perfect.
    this.waitUntilAppear(2000, 'Popup should appear before manipulating');
    // I love to use fragments, because they provide small and reusable parts of a page.
    this.fragment.$('input.email').sendKeys(email);
    this.fragment.$('button.submit').click();
    this.waitUntilDisappear(2000, 'Popup should disappear before manipulating');
  }
}

module.exports = LoginPage;
// Page.js
class Page {
  constructor(url) {
    // This is the part of the path that will be appended to the base URL.
    this.url = url;
  }

  open() {
    // Getting baseURL from the params object in the config.
    browser.get(browser.params.baseURL + this.url);
    return this; // this allows chaining methods.
  }
}

module.exports = Page;
// Fragment.js
class Fragment {
  constructor(fragment) {
    this.fragment = fragment;
    this.EC = protractor.ExpectedConditions; // used by the wait helpers below
  }

  // Example of some general methods for all fragments.
  // Note that default method parameters only work in Node.js 6.x and later.
  waitUntilAppear(timeout = 5000, message) {
    browser.wait(this.EC.visibilityOf(this.fragment), timeout, message);
  }

  waitUntilDisappear(timeout = 5000, message) {
    browser.wait(this.EC.invisibilityOf(this.fragment), timeout, message);
  }
}

module.exports = Fragment;
// Then in your test:
let loginPage = new LoginPage().open(); // chaining in action - getting a LoginPage instance in return.
loginPage.restorePassword('batman@gmail.com'); // all the logic is hidden in the Fragment object
loginPage.login('superman@gmail.com', 'password');
When a user refreshes a certain page, I want to set some initial values from the MongoDB database.
I tried using the onRendered callback, which the documentation states will run when the template it is attached to is inserted into the DOM. However, the database does not seem to be available at that point.
When I try to access the database from the function:
Template.scienceMC.onRendered(function() {
  var currentRad = radiationCollection.find().fetch()[0].rad;
});
I get the following error messages:
Exception from Tracker afterFlush function:
TypeError: Cannot read property 'rad' of undefined
However, when I run the line radiationCollection.find().fetch()[0].rad; in the console, I can access the value.
How can I make sure that the client copy of the MongoDB collection is available?
The best way for me was to use the waitOn function in the router. Thanks to @David Weldon for the tip.
Router.route('/templateName', {
  waitOn: function () {
    return Meteor.subscribe('collectionName');
  },
  action: function () {
    // render all templates and regions for this route
    this.render();
  }
});
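With the subscription guaranteed to be ready before the route renders, the value from the question can then be read safely. A minimal sketch reusing the question's names (the guard against an empty cursor is just an extra precaution):

Template.scienceMC.onRendered(function () {
  // The router has already waited on the subscription, so the data is available here.
  var doc = radiationCollection.findOne();
  var currentRad = doc && doc.rad;
  // ... use currentRad to set the initial values
});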
You need to set up a proper publication (it seems you did) and subscribe in the route parameters. If you want to make sure that your data is actually available in the onRendered function, you need to add an extra step.
Here is an example of how to make it in your route definition:
this.templateController = RouteController.extend({
  template: "YourTemplate",
  action: function() {
    if (this.isReady()) {
      this.render();
    } else {
      this.render("yourTemplate");
      this.render("loading");
    }
    /*ACTION_FUNCTION*/
  },
  isReady: function() {
    var subs = [
      Meteor.subscribe("yoursubscription1"),
      Meteor.subscribe("yoursubscription2")
    ];
    var ready = true;
    _.each(subs, function(sub) {
      if (!sub.ready())
        ready = false;
    });
    return ready;
  },
  data: function() {
    return {
      params: this.params || {}, // if you have params
      yourData: radiationCollection.find()
    };
  }
});
In this example, you can access your data in the onRendered function using either this.data.yourData or radiationCollection.find().
EDIT: as @David Weldon stated in a comment, you could also use an easier alternative: waitOn.
I can't see your collection, so I can't guarantee that rad is a key in it; that said, I believe your problem is that the collection isn't available yet. As @David Weldon says, you need to guard against, or wait on, your subscription being available (remember it has to load).
What I do in ironrouter is this:
data: function() {
  var docs = radiationCollection.find().fetch();
  // Guard: the collection may not be ready yet
  if (docs.length > 0 && typeof docs[0].rad !== 'undefined') {
    return docs[0].rad;
  }
}
I'm trying to return the window.performance object from the web page back to Casper's scope with the following code, but I'm getting null. Can someone explain why?
performance = casper.evaluate ->
  return window.performance
@echo performance
PhantomJS 1.x doesn't implement window.performance, so you can't use it.
PhantomJS 2.0.0 implements it, but it doesn't implement the window.performance.toJSON() function. The problem with PhantomJS is that you have to access this information through evaluate(), but it has the following limitation:
Note: The arguments and the return value to the evaluate function must be a simple primitive object. The rule of thumb: if it can be serialized via JSON, then it is fine.
Closures, functions, DOM nodes, etc. will not work!
You will have to find your own way of serializing this in the page context and passing it to the outside (JavaScript):
var performance = casper.evaluate(function(){
  var t = window.performance.timing;
  var n = window.performance.navigation;
  return {
    timing: {
      connectStart: t.connectStart,
      connectEnd: t.connectEnd,
      ...
    },
    navigation: {
      type: n.type,
      redirectCount: n.redirectCount
    },
    ...
  };
});
or look for a deep copy algorithm that produces a serializable object (from here):
var perf = casper.evaluate(function(){
  function cloneObject(obj) {
    var clone = {};
    for (var i in obj) {
      if (typeof(obj[i]) == "object" && obj[i] != null)
        clone[i] = cloneObject(obj[i]);
      else
        clone[i] = obj[i];
    }
    return clone;
  }
  return cloneObject(window.performance);
});
console.log(JSON.stringify(perf, undefined, 4));
I'm trying to use a WebWorker to parse certain files from a directory the user selected, like so:
function readEntries(reader, callback) {
  reader.readEntries(function(results) {
    results.forEach(function(entry) {
      if (entry.isDirectory) { readEntries(entry.createReader(), callback); }
      else { callback(entry); }
    });
  });
}

readEntries(initialDir.createReader(), function(entry) {
  var nodeWorker = new Worker('worker.js');
  nodeWorker.onmessage = function(e) { /* do stuff */ };
  nodeWorker.postMessage(entry);
});
The initialDir is a DirectoryEntry from chrome.fileSystem.chooseEntry({'type': 'openDirectory'}, ....
The corresponding worker.js would use FileReaderSync to parse the passed file(s) and pass analysis results back to /* do stuff */.
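For context, a minimal sketch of what such a worker might look like, assuming a File object (rather than the Entry itself) could be delivered to it; the message shape posted back is hypothetical:

// worker.js
self.onmessage = function (e) {
  var file = e.data;                 // a File object, which is structured-cloneable
  var reader = new FileReaderSync(); // synchronous reads are allowed inside workers
  var text = reader.readAsText(file);
  // ... parse `text` and post the analysis results back
  self.postMessage({ name: file.name, length: text.length });
};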
If I execute the main-thread code above, I get the following DOMException:
Error: Failed to execute 'postMessage' on 'Worker': An object could not be cloned.
Since entry.toURL() returns an empty string, how am I supposed to pass local, unsandboxed files to the web worker? Is there any other way to achieve what I want?