Stop huge error output from testing-library - react-testing-library

I love testing-library, have used it a lot in a React project, and I'm trying to use it in an Angular project now - but I've always struggled with the enormous error output, which includes the HTML text of the render. Not only is this usually unhelpful ("I couldn't find an element; here's the HTML where it isn't"), but it also gets truncated, often before the interesting line when you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.

I would say the best solution is to use the configure method and pass a custom function for getElementError which does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
import { configure } from '@testing-library/dom';

configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null; // drop the callstack from the output
    return error;
  },
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
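For example, a minimal sketch of the Jest wiring (the file names below are placeholders I chose, not anything the answer prescribes):

// jest.config.ts
export default {
  // run this file after the test framework is installed, before each test file
  setupFilesAfterEnv: ['<rootDir>/jest.setup.ts'],
};

The configure({ getElementError }) call from above would then live in jest.setup.ts.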

I am assuming you are running Jest with RTL in your project.
I personally wouldn't turn it off, as it's there to help us, but everyone has their own way, so if you have your reasons, fair enough.
1. If you want to disable errors for a specific test, you can mock the console.error (see also the jest.spyOn variant sketched below).
it('disable error example', () => {
  const errorObject = console.error; // store the original implementation
  console.error = jest.fn(); // mock the object
  // test code
  // assertions (expect)
  console.error = errorObject; // assign it back so you can use it in the next test
});
2. If you want to silence it for all tests, you could use the jest --silent CLI option. Check the docs.
The above might even disable the DOM printing that is done by RTL - I am not sure, as I haven't tried it, but the docs I linked say:
"Prevent tests from printing messages through the console."
If the above doesn't work, you almost certainly have everything disabled except the DOM printing. In that case you might look into react-testing-library's source code and find out what is used for those print statements. Is it a console.log? Is it a console.warn? Once you know, just mock it out like option 1 above.
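As a side note, a common alternative sketch to option 1's manual save-and-restore is jest.spyOn (standard Jest API), which restores the original implementation even if a test throws:

let errorSpy;

beforeEach(() => {
  // silence console.error for every test in this file
  errorSpy = jest.spyOn(console, 'error').mockImplementation(() => {});
});

afterEach(() => {
  // put the real console.error back, even if the test failed
  errorSpy.mockRestore();
});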
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM();
While prettyDOM() can't be disabled, you can limit its printed output to 0, which leaves just the error message and three dots ... below it.
Here is an example printout I messed around with:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is pass an environment variable when executing your test suite; for example, with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's feature request on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it's used elsewhere). The getElementError config option needs to be changed:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
  const error = new Error(
    [message, prettyDOM(container)].filter(Boolean).join('\n\n'),
  )
  error.name = 'TestingLibraryElementError'
  return error
},
The callstack can also be removed.

You can change how the message is built by setting the DOM Testing Library message-building function via configure. In my Angular project I added this to test.js:
import { configure } from '@testing-library/dom';

configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.

Related

VsCode Extension custom CompletionItem disables built-in Intellisense

I am working on a VsCode extension in that I want to provide custom snippets for code completion.
I know about the option of using snippet json files directly; however, those have the limitation of not being able to utilize the CompletionItemKind property that determines the icon next to the completion suggestion in the pop-up.
My issue:
If I implement a simple CompletionItemProvider like this:
context.subscriptions.push(
  vscode.languages.registerCompletionItemProvider(
    { scheme: "file", language: "MyLang" },
    {
      provideCompletionItems(document: vscode.TextDocument, position: vscode.Position) {
        let item = new vscode.CompletionItem('test');
        item.documentation = 'my test function';
        item.kind = vscode.CompletionItemKind.Function;
        return [item];
      }
    }
  )
);
then the original VsCode IntelliSense text suggestions are not shown anymore, only my own. If I instead return a kind of empty response, like
provideCompletionItems(document: vscode.TextDocument, position: vscode.Position) {
  return []; // or null or undefined
}
the suggestions appear again as they should. It seems to me that instead of merging the results of the built-in IntelliSense and my own provider, the built-in ones simply get overridden.
Question:
How can I keep the built-in IntelliSense suggestions while applying my own CompletionItems?
VsCode Version: v1.68.1 Ubuntu
I seem to have found the answer for my problem, so I will answer my question.
Multiple providers can be registered for a language. In that case providers are sorted by their {@link languages.match score} and groups of equal score are sequentially asked for completion items. The process stops when one or many providers of a group return a result.
My provider seems to provide results that are just higher scored than those of IntelliSense.
Since I didn't provide any trigger characters, my CompletionItems were competing directly with the words found by the built-in system on every single keypress, and won. My solution is to simply parse and register the words in my TextDocument myself and extend my provider results with them (a sketch follows below). I could probably just as well create and register a new CompletionItemProvider for them if I wanted to, but I decided to have a different structure for my project.
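A minimal sketch of that idea, assuming a simple word regex (the regex and the merging strategy are illustrative, not the OP's exact code):

provideCompletionItems(document: vscode.TextDocument, position: vscode.Position) {
  // my own curated item, with an icon via CompletionItemKind
  const fn = new vscode.CompletionItem('test', vscode.CompletionItemKind.Function);
  fn.documentation = 'my test function';

  // re-add the plain words of the document, which the built-in text
  // suggestions would otherwise have provided before being overridden
  const words = new Set(document.getText().match(/\w{2,}/g) ?? []);
  const wordItems = [...words].map(
    (w) => new vscode.CompletionItem(w, vscode.CompletionItemKind.Text)
  );

  return [fn, ...wordItems];
}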

Protractor Custom Locator: Not available in production, but working absolutely fine on localhost

I have added a custom locator in Protractor; below is the code:
const customLocaterFunc = function (locater: string, parentElement?: Element, rootSelector?: any) {
  var using = parentElement || (rootSelector && document.querySelector(rootSelector)) || document;
  return using.querySelector("[custom-locater='" + locater + "']");
};
by.addLocator('customLocater', customLocaterFunc);
And then I configured it inside the protractor.conf.js file, in the onPrepare method, like this:
...
onPrepare() {
  require('./path-to-above-file/');
  ...
}
...
When I run my tests on the localhost, using browser.get('http://localhost:4200/login'), the custom locator function works absolutely fine. But when I use browser.get('http://11.15.10.111/login'), the same code fails to locate the element.
Please note that the test runs, the browser opens, user input gets provided, and the user gets logged in successfully as well, but the element which is referred to via this custom locator is not found.
FYI, 11.15.10.111 is the remote machine (a virtual machine) where the application is deployed. So, in short, the custom locator works as expected on localhost but fails on production.
Not an answer, but something you'll want to consider.
I remember adding this custom locator and encountering some problems with it, and I realised it's just an attribute name... nothing fancy, so I thought it's actually much faster to write
let elem = $('[custom-locator="locator"]')
which is equivalent to
let elem = element(by.css('[custom-locator="locator"]'))
than
let elem = element(by.customLocator('locator'))
And I gave up on this idea. So maybe you'll want to go this way too.
I was able to find a solution to this problem: I used the data- prefix for the custom attribute in the HTML, with which I can find the custom attribute on the production build as well.
It is an HTML5 convention to prepend data- to any custom attribute.
Apart from this, another mistake I was making was with the selector's name. In my code the selector name is in camelCase (loginBtn), but in the production build it was replaced with loginbtn (all lowercase), which is why my custom locator was not able to find it on the production build.
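Putting both fixes together, a sketch of what that might look like (the attribute name and value here are illustrative):

// in the template: <button data-custom-locater="loginbtn">Login</button>
// all-lowercase value, so the production build cannot change its case

const customLocaterFunc = function (locater: string, parentElement?: Element, rootSelector?: any) {
  var using = parentElement || (rootSelector && document.querySelector(rootSelector)) || document;
  return using.querySelector("[data-custom-locater='" + locater + "']");
};
by.addLocator('customLocater', customLocaterFunc);

// in a spec:
const loginButton = element(by.customLocater('loginbtn'));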

Debugging test cases when they are combination of Robot framework and python selenium

Currently I'm using Eclipse with the Nokia/Red plugin, which allows me to write Robot Framework test suites, with support for Python 3.6 and Selenium.
My project is called "Automation" and the test suites are in .robot files.
Test suites contain test cases, which are built from "Keywords".
*** Test Cases ***
Create New Vehicle
    Create new vehicle with next ${registrationno} and ${description}
    Navigate to data section
Those "Keywords" are imported from python library and look like:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
basicVehicleCreation = sideBarPage.createNewVehicle()
basicVehicleCreation.setKennzeichen(registrationno)
basicVehicleCreation.setBeschreibung(description)
TestCaseKeywords.carnumber = basicVehicleCreation.save()
The problem is that when I run test cases, in the log I only get the result of the whole python function, pass or failed. I can't see at which step it failed - was it the first or second step of the function?
Is there any plugin or other solution that would let me see which exact python function passed or failed? (Of course, a workaround is to wrap every function in its own keyword in the test case, but that is not what I prefer.)
If you need to "step into" a python-defined keyword, you need to use a python debugger together with RED.
This can be done with any python debugger; if you'd like to have everything in one application, PyDev can be used with RED.
Follow the help document below; if you face any problems, leave a comment here.
RED Debug with PyDev
If you want to know which statement in the python-based keyword failed, you simply need to have it throw an appropriate error. Robot won't do this for you, however. From a reporting standpoint, a python-based keyword is a black box. You will have to explicitly add logging messages and raise useful errors.
For example, the call to sideBarPage.createNewVehicle() should throw an exception such as "unable to create new vehicle". Likewise, the call to basicVehicleCreation.setKennzeichen(registrationno) should raise an error like "failed to register the vehicle".
If you don't have control over those methods, you can do the error handling from within your keyword:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
try:
basicVehicleCreation = sideBarPage.createNewVehicle()
except:
raise Exception("unable to create new vehicle")
try:
basicVehicleCreation.setKennzeichen(registrationno)
except:
raise exception("unable to register new vehicle")
...

Protractor - getText as input to sendKeys and other actions

I am writing a protractor test, where I need to read a span/div with id='mylabel' using getText(). I then need to pass the value to an input (id='myinput') using sendKeys().
So, I do this:
var value;
element(by.id('mylabel')).getText().then(function (txt) {
  value = txt;
  element(by.id('myinput')).sendKeys(value);
  // do "other protractor tasks" with 'value'.
});
But, is there a way I can avoid the nesting, by asking protractor to perform sendKeys and subsequent actions only after the value variable is set?
The above is a simple case, but I soon find code getting into multiple nesting because of the waiting for promises to be resolved. Also, I observed that protractor does not provide a stacktrace if "other protractor tasks" throws an error due to an error somewhere down the line (it just hangs and times out).
I am using Protractor 2.1.0 and I am working with Angular JS pages.
I am specifically interested to know if it is a known issue to have silent errors in nested tasks with Protractor and is there anyway to solve it?
Protractor handles at least one level of promises without the need for a then function, so you can expect a synchronous-looking flow.
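For example, Protractor's patched expect resolves a promise before asserting, so this works without a then (the expected text here is illustrative):

// expect() unwraps the getText() promise before comparing
expect(element(by.id('mylabel')).getText()).toEqual('expected text');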
If you are looking for an event-based action, like watching for a value to update, then you can set up something like this:
function waitForTextToUpdate(elm, defaultText, timeout) {
  if (typeof timeout === 'undefined') {
    timeout = 10000; // default to a 10s wait
  }
  return browser.driver.wait(function () {
    return elm.getText().then(function (value) {
      return !(value.indexOf(defaultText) > -1);
    });
  }, timeout, "Expectation error (waitForTextToUpdate): Timed out waiting for element state to change.");
}
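Hypothetical usage of the helper above, assuming the label starts out showing placeholder text:

// blocks until #mylabel no longer contains 'Loading...'
waitForTextToUpdate(element(by.id('mylabel')), 'Loading...', 5000);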
Promises are inevitable in protractor. There is no way to avoid handling promises, but if you want to avoid nesting, it can be done easily using .then() chaining. Here's an example -
var value = '';
element(by.id('mylabel')).getText()
  .then(function (txt) {
    value = txt;
  })
  .then(function () {
    element(by.id('myinput')).sendKeys(value);
    // do "other protractor tasks" with 'value'.
  });
There's also an npm package available for this: the Q npm package. It works similarly to the above example but is more extensive.
Hope this helps.

How do I customize wintersmith paginator?

I've been setting up a site with Wintersmith and am loving it for the most part, but I cannot wrap my head around some of the under-the-hood mechanics. I started with the "blog" skeleton that adds the paginator.coffee plugin.
The question requires some details, so up top, here's what I'm trying to accomplish:
1. Any files (markdown, html, json metadata) will be picked up either in /contents/articles/<file> or /contents/articles/<subdir>/<file>.
2. Output files are at /articles/YYYY/MM/DD/title-slug/.
3. /blog.html lists all articles, paginated.
4. Files just under /contents (not in articles) are not treated as blog posts. Markdown and JSON metadata are still processed, but there are no permalinked URLs, they are not included in blog listings, and the file/directory structure is more directly copied over.
So, I solved #1 with this suggestion: How can I have articles in Wintersmith not in their own subdirectory? So far, great, and #3 is working -- the paginated listing includes all posts. #4 has not been an issue, it's the default behavior.
On #2 I found this solution: http://andrewphilipclark.com/2013/11/08/removing-the-boilerplate-from-wintersmith-blog-posts/ . As the author mentions, his solution was (sort of) subsequently incorporated into Wintersmith master, so I tried just setting the filenameTemplate accordingly. Unfortunately this applies to all content, not just that under /articles, so the rest of my site gets hosed (breaking #4). So then I tried the author's approach, adding a blogpost.coffee plugin using his code. This generates all the files out of /contents/articles into the correct permalink URLs; however, the paginator now for some reason no longer sees files directly under /articles (point #1).
I've tried a lot of permutations and hacking. Tried changing the order in which the plugins get loaded. Tried having PaginatorPage extend BlogpostPage instead of Page. Tried a lot of things. I finally realized, even after inspecting many of the core classes in the Wintersmith source, that I do not understand what is happening.
Specifically, I cannot figure out how contents['articles']._.pages and .directories are set, which seems relevant. Nor do I understand what that underscore is.
Ultimately, Jade/CoffeeScript/Markdown are a great combo for minimizing coding and enhancing clarity except when you want to understand what's happening under the hood and you don't know these languages. It took me a bit to get the basics of Jade and CoffeeScript (Markdown is trivial of course) enough to follow what's happening. When I've had to dig into the wintersmith source, it gets deeper. I confess I'm also a node.js newbie, but I think the big issue here is just a magic framework. It would be helpful, for instance, if some of the core "plugins" were included in the skeleton site as opposed to buried in node_modules, just so curious hackers could see more quickly how things interconnect. More verbose docs would of course be helpful too. It's one thing to understand conceptually content trees, generators, views, templates, etc., but understanding the code flow and relations at runtime? I'm lost.
Any help is appreciated. As I said, I'm loving Wintersmith, just wish I could dispel magic.
Because coffee script is rubbish, this is extremely hard to do. However, if you want to, you can destroy the paginator.coffee and replace it with a simple javascript script that does a similar thing:
module.exports = function (env, callback) {
  function Page() {
    var rtn = new env.plugins.Page();
    rtn.getFilename = function () {
      return 'index.html';
    };
    rtn.getView = function () {
      return function (env, locals, contents, templates, callback) {
        var error = null;
        var context = {};
        env.utils.extend(context, locals);
        // the view must hand a Buffer (or Stream) back to the callback
        var buffer = new Buffer(templates['index.jade'].fn(context));
        callback(error, buffer);
      };
    };
    return rtn;
  }

  /** Generates a custom index page */
  function gen(contents, callback) {
    var p = Page();
    var pages = { 'index.page': p };
    var error = null;
    callback(error, pages);
  }

  env.registerGenerator('magic', gen);
  callback();
};
Notice that due to 'coffee script magic', there are a number of hoops to jump through here, such as making sure you return a Buffer from getView(), and 'manually' overriding rather than using the obscure coffee script extension semantics.
Wintersmith is extremely picky about how it handles these functions. If callbacks are not invoked, or the returned value is not a Stream or Buffer, generated files will appear in the content summary but not be rendered to disk during a build. Enable verbose logging and check for 'skipping foo' messages to detect this.