How to make a custom child command automatically retry - coffeescript

I'm trying to use the following dual command as a shortcut to find DOM elements.
Cypress.Commands.add "el", prevSubject: "optional", (subject, id) =>
if subject?
subject.find("[data-cy=#{id}]")
else
cy.get("[data-cy=#{id}]")
The problem is that the command doesn't retry if the element I'm looking for needs a moment to appear.
All the following approaches work
cy.wait(1000) # wait for element to appear
cy.get("parent").el("mark")
cy.get("parent").find("[data-cy=mark]") # or type out what the command does
cy.el("mark") # or use the command as parent command
but just cy.get("parent").el("mark") doesn't wait for the element to appear and fails.
I get the same problem if I define the command as a child command, like this:
Cypress.Commands.add "el", prevSubject: true, (subject, id) =>
  subject.find("[data-cy=#{id}]")
Is there a way to get my custom command to behave the same way as find does?

Do this instead
coffeescript:
cy.wrap(subject).find("[data-cy=#{id}]")
javascript:
cy.wrap(subject).find(`[data-cy=${id}]`)

This is quite surprising, but I was able to verify your results.
The simplest workaround I came up with (a bit of a hack) is to re-get the subject within the custom command.
Cypress.Commands.add('el_with_ReGet', { prevSubject: true }, (subject, id) => {
  const selector = subject.selector || subject.prevObject.selector;
  return cy.get(selector).find(`[data-cy=${id}]`)
})
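Usage is then the same as in the question, apart from the name (a sketch based on the question's markup):
cy.get('parent').el_with_ReGet('mark') // retries, because the re-issued cy.get().find() retries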
Another option is to use the 3rd party Cypress Pipe instead of a custom command.
cy.pipe can be used as a simpler replacement for Cypress Custom Commands - you just write functions.
cy.pipe works very similarly to cy.then except for a few key differences:
pipe will try to document the function name in the Command log (only works on named functions)
pipe will create DOM snapshots to aid in debugging
If the function passed to pipe resolves synchronously (doesn't contain Cypress commands):
AND returns a jQuery element, pipe will retry until the jQuery element list is not empty (most Cypress commands do this)
AND is followed by a cy.should, the function will be retried until the assertion passes or times out (most Cypress commands do this)
import 'cypress-pipe';

it('should find child by id by pipe (replacing custom command)', () => {
  const elFn = (id) => (subject) => subject.find(`[data-cy=${id}]`)

  cy.visit(...)
  cy.get('parent')
    .pipe(elFn('mark'))
    .then(result => {
      console.log('find result', result)
      expect(result.length).to.eq(1)
    })
})
There is a discussion in the Cypress issue "Cypress.Commands.add needs option to force retry on that command" (#2670), with an example from Gleb Bahmutov using cy.verifyUpcomingAssertions(), but it looks quite complicated.
This pattern worked OK when the test (eventually) succeeded, but I was unable to get it to stop retrying when the test failed (it should time out, but I can't figure out how).

Related

Stop huge error output from testing-library

I love testing-library, have used it a lot in a React project, and I'm trying to use it in an Angular project now, but I've always struggled with the enormous error output, including the HTML text of the render. Not only is this not usually helpful (I couldn't find an element, here's the HTML where it isn't), but it gets truncated, often before the interesting line if you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.
I would say the best solution would be to use the configure method and pass a custom function for getElementError which does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
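For example, a global setup could look like the sketch below (the file names and the @testing-library/dom import path are assumptions; adjust them to your project):
// jest.config.js
module.exports = {
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
};

// jest.setup.js (assumed file name, referenced above)
const { configure } = require('@testing-library/dom');

configure({
  getElementError: (message, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});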
I am assuming you are running Jest with RTL in your project.
I personally wouldn't turn it off, as it's there to help us, but everyone has their own way, so if you have your reasons, then fair enough.
1. If you want to disable errors for a specific test, you can mock the console.error.
it('disable error example', () => {
  const errorObject = console.error; // store the state of the object
  console.error = jest.fn(); // mock the object
  // code
  // assertion (expect)
  console.error = errorObject; // assign it back so you can use it in the next test
});
2. If you want to silence it for all the tests, you could use the jest --silent CLI option. Check the docs.
The above might even disable the DOM printing that is done by rtl, I am not sure as I haven't tried this, but if you look at the docs I linked, it says
"Prevent tests from printing messages through the console."
If the above doesn't work, you almost certainly have everything disabled except the DOM printing. In that case you might look into react-testing-library's source code and find out what is used for those print statements. Is it a console.log? Is it a console.warn? Once you know, just mock it out like option 1 above.
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM();
While prettyDOM() can't be disabled, you can limit the number of lines to 0, and that would just give you the error message and three dots (...) below the message.
Here is an example printout, I messed around with:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is to pass in an environment variable before executing your test suite, so for example with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's feature request on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it's used elsewhere). The getElementError config option needs to be changed:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
  const error = new Error(
    [message, prettyDOM(container)].filter(Boolean).join('\n\n'),
  )
  error.name = 'TestingLibraryElementError'
  return error
},
The callstack can also be removed
You can change how the message is built by setting the DOM testing library's message-building function in its configuration. In my Angular project I added this to test.js:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.

Test in mocha not completing if shareReplay operator of RxJS is used

I have a simple Javascript function that returns an observable to which I have applied the shareReplay operator with parameter 1.
// RxJS 6-style imports (assumed; the question doesn't state the exact version)
import { interval } from 'rxjs';
import { shareReplay, tap, take } from 'rxjs/operators';

export function doStuffWithShareReplay() {
  return interval(100).pipe(
    shareReplay(1),
    tap(d => console.log('do stuff 1', d)),
    take(5)
  );
}
If I put such a function within a mocha test and run it from within VS Code, the execution of the test never seems to complete and I have to stop it manually. More precisely, the test passes as expected, but the small control pad at the top-center of VS Code does not close and I have to click the red button to close it. If I remove shareReplay, the execution ends as expected. I am wondering what the reason for this behavior is.
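The test itself is essentially the following (a sketch; the exact assertions are incidental):
it('emits five values and completes', (done) => {
  const values = [];
  doStuffWithShareReplay().subscribe({
    next: (v) => values.push(v),
    complete: () => done() // fires, so the test passes, yet the process keeps running
  });
});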
Use publishReplay(1) and refCount() instead of shareReplay(1):
return interval(100).pipe(
  publishReplay(1),
  refCount(),
  ...
There's a bug in shareReplay(1) since RxJS 5.5 (that still exists in RxJS 6.1) that prevents it from unsubscribing from its source.
For more details see this issue: https://github.com/ReactiveX/rxjs/issues/3336
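Applied to the function from the question, a minimal sketch looks like this (the new function name and the RxJS 6 import paths are my own):
import { interval } from 'rxjs';
import { publishReplay, refCount, tap, take } from 'rxjs/operators';

export function doStuffWithPublishReplay() {
  return interval(100).pipe(
    // publishReplay(1) + refCount() replays the latest value like shareReplay(1),
    // but unsubscribes from the source interval once the last subscriber is gone,
    // so the mocha process can exit.
    publishReplay(1),
    refCount(),
    tap(d => console.log('do stuff 1', d)),
    take(5)
  );
}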

chai.assert() wont run methods in a test before the assertion (chai assert lib with protractor)

First time I post an issue on SO, I hope I'm doing it right.
it(' :: 2.0 service creation :: should fill out service info tab', function() {
  createNewService.setServiceName(e2eConfig.newServiceDetails.basicServiceName);
  createNewService.selectCategory();
  createNewService.setIntroText(e2eConfig.newServiceDetails.introText);
  createNewService.selectParent();
  createNewService.uploadIcon();
  createNewService.nextTab();
  // right now assert will fire off without running the methods above because
  // we are still on the infoTab
  assert(($(createNewService.selectors.infoTab).isDisplayed()) == true, 'did not move to the next tab');
}, 20000);
What this test does is it fills the inputs, selects drop-downs where necessary and uploads a file.
The test then attempts to switch to the next tab in the widget.
To determine whether it managed to switch to the next tab I want to make a chai library assertion with a custom message.
With the current code, the assert fires immediately because it still sees the infoTab, and the test fails without running any of the methods before the assert.
If I change the assert line to look for '!== true', then it's going to run the methods and move on.
In any case, would it be better to do this in a different manner or perhaps use expect instead of assert?
Chai assert API
Chai expect API
All Protractor function calls return promises that resolve asynchronously, so if the functions you defined on createNewService are all calling Protractor functions, you'll have to wait for them to resolve before calling the assert. Try something like the following:
it(' :: 2.0 service creation :: should fill out service info tab', function(done) {
  createNewService.setServiceName(e2eConfig.newServiceDetails.basicServiceName);
  createNewService.selectCategory();
  createNewService.setIntroText(e2eConfig.newServiceDetails.introText);
  createNewService.selectParent();
  createNewService.uploadIcon();
  createNewService.nextTab().then(function() {
    assert.eventually.strictEqual($(createNewService.selectors.infoTab).isDisplayed(), true, 'did not move to the next tab');
    done();
  });
}, 20000);
A few things to note:
This example assumes that createNewService.nextTab() returns a promise.
You'll need to use a library like chai-as-promised to handle assertions on the values returned from promises. In your code you're asserting that a promise object == true, which is truthy due to coercion.
Since your functions run asynchronously, you'll need to pass a callback to your anonymous function, then call it when your test is finished. Information about testing asynchronous code can be found here.
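Note that assert.eventually comes from chai-as-promised, so a setup along these lines is assumed (this is the library's documented wiring):
const chai = require('chai');
const chaiAsPromised = require('chai-as-promised');

chai.use(chaiAsPromised);
const assert = chai.assert; // assert.eventually.* is now available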

Why don't all of the tests in WWW::Selenium appear to run?

I have a test file that includes the following code. The page opens up and the first three tests run but the next 3 don't run. They don't fail. They just don't run.
my $sel = Test::WWW::Selenium->new(
    host         => 'localhost',
    port         => 4444,
    browser      => 'firefox',
    browser_url  => $uri,
    singlewindow => 1,
);
$sel->open_ok('/');
$sel->is_element_present_ok('username');
$sel->is_element_present_ok('password');
$sel->is_text_present('Username');
$sel->is_text_present('Password');
$sel->is_text_present('Login');
Then I log into the form.
$sel->type("name=username", $username);
$sel->type("name=password", $password);
$sel->submit('dom=document.forms["formfield"]');
$sel->is_text_present('Change Log');
I'm able to log in and the text 'Change Log' is present on the page but the test never runs. The test for text that doesn't exist doesn't run either.
$sel->is_text_present('Fee based gumbo');
Anyone know why these tests wouldn't be running?
I also have a test for a link
<a style="text-decoration:none;" href="/settings">Settings</a>
as
$sel->click('//a[contains(@href, "/settings")]');
That causes the test to crash. Is there a reason why that wouldn't be found?
EDIT:
I'd been using the WWW::Selenium page, not Test::WWW::Selenium (http://search.cpan.org/~lukec/Test-WWW-Selenium-1.32/lib/Test/WWW/Selenium.pm):
This module is a WWW::Selenium subclass providing some methods useful for writing tests. For each Selenium command (open, click, type, ...) there is a corresponding <command>_ok method that checks the return value (open_ok, click_ok, type_ok).
The xpath problem with $sel->click('//a[contains(@href, "/settings")]'); simply needed the double quotes removed.

install4j: how can i pass command line arguments to windows service

I've created a Windows service using install4j and everything works, but now I need to pass command line arguments to the service. I know I can configure them at service creation time in the new service wizard, but I was hoping to either pass the arguments to the register service command, i.e.:
myservice.exe --install --arg arg1=val1 --arg arg1=val2 "My Service Name1"
or by putting them in the .vmoptions file like:
-Xmx256m
arg1=val1
arg2=val2
It seems like the only way to do this is to modify my code to pick up the service name via exe4j.launchName and then load some other file or environment variables that hold the necessary configuration for that particular service. I've used other service creation tools for Java in the past and they all had straightforward support for command line arguments registered by the user.
I know you asked this back in January, but did you ever figure this out?
I don't know where you're sourcing val1, val2 etc from. Are they entered by the user into fields in a form in the installation process? Assuming they are, then this is a similar problem to one I faced a while back.
My approach for this was to have a Configurable Form with the necessary fields (as Text Field objects), and obviously have variables assigned to the values of the text fields (under the 'User Input/Variable Name' category of the text field).
Later in the installation process I had a Display Progress screen with a Run Script action attached to it with some java to achieve what I wanted to do.
There are two 'gotchas' when optionally setting variables in install4j this way. Firstly, the variable HAS to be set no matter what, even if it's just to the empty string. So, if the user leaves a field blank (i.e. they don't want to pass that argument into the service), you'll still need to provide an empty string to the Run executable or Launch Service task (more on that in a moment). Secondly, arguments can't have spaces: every space-separated argument has to have its own line.
With that in mind, here's a Run script code snippet that might achieve what you want:
final String[] argumentNames = {"arg1", "arg2", "arg3"};

// For each argument this script creates two variables. For example, for arg1 it creates
// arg1ArgumentIdentifierOptional and arg1ArgumentAssignmentOptional.
// If the value of the variable set from the previous form (in this case, arg1) is not empty,
// it will set 'arg1ArgumentIdentifierOptional' to '--arg' and 'arg1ArgumentAssignmentOptional'
// to the string arg1=val1 (where val1 was the value the user entered in the form).
// Otherwise, both variables will be set to the empty string.
//
// This allows the installer to pass both parameters in a later Run executable task
// without worrying about whether they're set or not.
for (String argumentName : argumentNames) {
    String argumentValue = context.getVariable(argumentName) == null ? null : context.getVariable(argumentName) + "";
    boolean valueNonEmpty = (argumentValue != null && argumentValue.length() > 0);
    context.setVariable(
        argumentName + "ArgumentIdentifierOptional",
        valueNonEmpty ? "--arg" : ""
    );
    context.setVariable(
        argumentName + "ArgumentAssignmentOptional",
        valueNonEmpty ? argumentName + "=" + argumentValue : ""
    );
}
return true;
The final step is to launch the service or executable. I'm not too sure how services work, but with the executable, you create the task, then edit the 'Arguments' field, giving it a line-separated list of values.
So in your case, it might look like this:
--install
${installer:arg1ArgumentIdentifierOptional}
${installer:arg1ArgumentAssignmentOptional}
${installer:arg2ArgumentIdentifierOptional}
${installer:arg2ArgumentAssignmentOptional}
${installer:arg3ArgumentIdentifierOptional}
${installer:arg3ArgumentAssignmentOptional}
"My Service Name1"
And that's it. If anyone else knows how to do this better feel free to improve on this method (this is for install4j 4.2.8, btw).