I integrated CucumberJS with Protractor to write E2E tests for an Angular (not AngularJS) application.
Is there any easy way (maybe an already existing package) to take screenshots after each step (Given, When, Then) and compare them with reference images? If a reference image is not present, the screenshot should be registered as the new reference.
The step should fail if the images are too different.
Before asking this question I read CucumberJS: Take screenshot after each step, but that question is about taking a screenshot, not comparing.
Unfortunately, the npm modules that claim to do this seem abandoned (e.g. https://www.npmjs.com/package/protractor-image-comparison/v/1.7.0).
https://github.com/SAP/ui5-uiveri5 has a similar image comparison which could serve as an example (see docs/usage/visual testing). Basically, you need a custom Jasmine matcher and an image comparison module such as resemblejs.
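For illustration, here is a minimal sketch of such a helper, assuming resemblejs's resemble().compareTo().onComplete() API and Protractor's browser.takeScreenshot(); the baseline directory, the threshold, and the per-step hook you call it from are all assumptions, not an existing package:

const fs = require('fs');
const path = require('path');
const resemble = require('resemblejs');

const BASELINE_DIR = 'baselines';  // hypothetical folder for reference images
const MISMATCH_THRESHOLD = 1;      // max allowed difference, in percent

// Call this from whatever per-step hook your CucumberJS version exposes.
function checkScreenshot(stepName) {
  return browser.takeScreenshot().then(function (base64) {
    const current = Buffer.from(base64, 'base64');
    const baselinePath = path.join(BASELINE_DIR, stepName + '.png');

    // No reference image yet: register this screenshot as the reference.
    if (!fs.existsSync(baselinePath)) {
      fs.writeFileSync(baselinePath, current);
      return;
    }

    // Otherwise compare, and fail the step if the images differ too much.
    return new Promise(function (resolve, reject) {
      resemble(fs.readFileSync(baselinePath))
        .compareTo(current)
        .onComplete(function (data) {
          if (Number(data.misMatchPercentage) > MISMATCH_THRESHOLD) {
            reject(new Error(stepName + ' differs from baseline by ' + data.misMatchPercentage + '%'));
          } else {
            resolve();
          }
        });
    });
  });
}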
I'm interested in using Hot Module Replacement with a newly created React app.
Facebook Incubator's create-react-app uses Webpack 2, which can be configured to support HMR; however, in order to do so, one needs to "eject" the create-react-app project.
As the documentation points out, this is a "one way" operation and cannot be reversed.
If I'm to do this, I want to know what I might be giving up. I've been unable to locate any documentation that explains the potential drawbacks of ejecting.
The current configuration allows your project to get updates from the create-react-app core team. Once you eject, you no longer get this.
It's kind of like pulling in Bootstrap CSS via a CDN as opposed to downloading the source code and injecting it directly into your project.
If you want more control over your webpack, there are ways to configure/customize it without ejecting:
https://www.npmjs.com/package/custom-react-scripts
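For context, once your build does support HMR, the app-side wiring is small. A minimal sketch using webpack's standard module.hot API (the './App' module and 'root' element are illustrative):

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

const rootEl = document.getElementById('root');
ReactDOM.render(<App />, rootEl);

// Re-render on hot updates instead of reloading the whole page.
if (module.hot) {
  module.hot.accept('./App', () => {
    const NextApp = require('./App').default;
    ReactDOM.render(<NextApp />, rootEl);
  });
}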
I have written a few tests for Polymer elements in Jasmine, based on how Polymer wrote tests with Mocha for their components. I am able to run those tests successfully if I run them individually.
From looking at Polymer's core tests, what I understand is that there is a custom test runner that uses mocha-htmltest.js to launch each of the Polymer element tests (each an HTML page in itself) in an iframe and then destroy it for every test. The results to display are passed to the main window for every test.
In this approach, each Polymer element test HTML running within an iframe imports all the libraries it needs (Jasmine, platform, Polymer).
Isn't it costly to reconstruct iframes, importing all libraries, for each element's test?
Are there any alternative ways of running multiple Polymer element tests?
I could not find alternative approaches without one test polluting another. (I faced issues like only being able to listen for polymer-ready in the first element's test.)
Can anyone share some thoughts on how you managed to run multiple Polymer elements' tests with Karma as the test runner?
Thanks,
vj.
We chose the iframe approach because we wanted to write tests in plain HTML without resorting to JavaScript innerHTML tricks, and we use Karma to test in all of our supported browsers. iframes give us both of those requirements, at the expense of taking a while to run.
I must note that we typically test a number of related things in each iframe because the cost is so high. In that sense we use them as somewhere between a "suite" and a "test" in Mocha's terminology.
Perhaps at some point in the future, a lighter layer can be made (ES6/7 Realms + ShadowDOM?) that gives us a clean context for our test runs, but the speed hit is not especially heinous to us for now.
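For what it's worth, the parent-side pattern boils down to something like this sketch (the message shape and URLs are illustrative, not Polymer's actual runner):

// Run one suite's HTML in a throwaway iframe and collect its results,
// posted back to the parent via postMessage.
function runSuite(url) {
  return new Promise(function (resolve) {
    var frame = document.createElement('iframe');

    function onMessage(event) {
      if (event.source !== frame.contentWindow) return;
      window.removeEventListener('message', onMessage);
      frame.remove();        // destroy the iframe so suites cannot pollute each other
      resolve(event.data);   // e.g. { suite: url, passed: 12, failed: 0 }
    }

    window.addEventListener('message', onMessage);
    frame.src = url;         // the suite HTML imports platform/polymer/jasmine itself
    document.body.appendChild(frame);
  });
}

// Inside each suite's HTML, after its tests finish:
//   parent.postMessage({ suite: location.pathname, passed: p, failed: f }, '*');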
When I run my Selenium tests without the turbolinks gem installed in my Ruby on Rails app, the tests pass. When I include turbolinks, the tests fail. For example, if the test starts off
Open /
clickAndWait link=Sign in
type id=session_email any@example.com
Then I will get an error
"[error]Element id=session_email not found.
When I look at the page source, the session_email id is still there with turbolinks installed. I found this page, http://www.digitalkingdom.org/rlp/tiki-index.php?page=Selenium+And+Javascript, which seems to indicate there could be a problem with detecting when the page has fully loaded.
Is there a way to fix this without changing hundreds of lines in my test suites? If not, is there a reliable Selenium method that can test that a turbolinked page has fully loaded?
After some help from the GitHub turbolinks-compatibility project, I am able to provide a partial answer to this question.
If the turbolinks gem is being used, then you will need to modify your selenium test cases in order to make sure the page is really loaded. For example, if your test has the following code in it
Open /
clickAndWait link=Sign in
type id=session_email any@example.com
then it needs to be modified to
Open /
click link=Sign in
waitForElementPresent id=session_email
type id=session_email any@example.com
There are a number of "waitFor" commands you can use (e.g. waitForElementPresent, waitForTextPresent, waitForVisible), depending on which feature of the page you want to test next.
However, if the test involves a JavaScript pop-up, then you should not add a waitFor command. So, for example, if you have a test like
clickAndWait link=Delete
assertConfirmation Are you Sure?
you should not modify the code. Indeed, adding a waitFor command hangs execution in the case of JavaScript pop-ups.
This solution involves line-by-line manual modification of the code. I have opened up an issue on the Selenium Users group to see if there is some better way to handle this problem.
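If you drive the browser through WebDriver instead of the Selenium IDE, the same fix is an explicit wait. A sketch using the selenium-webdriver npm package (the URL and locators here are illustrative):

const { Builder, By, until } = require('selenium-webdriver');

async function signIn() {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('http://localhost:3000/');
    await driver.findElement(By.linkText('Sign in')).click();

    // Turbolinks swaps the page body without a full load event,
    // so wait for the element itself rather than for the page.
    const email = await driver.wait(until.elementLocated(By.id('session_email')), 5000);
    await email.sendKeys('any@example.com');
  } finally {
    await driver.quit();
  }
}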
I have a single-page web app that presents a multi-step photo management "wizard", split up across several discrete steps (photo upload, styling, annotation, publishing) via a tab strip. On switching steps I set the URL hash to #publishing-step (or whichever step was activated).
How do I set up Optimizely tests to run on the various discrete steps of the wizard?
The browser never leaves the page, so it only gets a single window.load event. Its DOM isn't getting scrapped or regenerated; it just switches which page elements are visible at any one time via display: none or block. So the part I am trying to figure out is mostly how to go about the Optimizely test setup itself - it's fine (and likely necessary) if all edits get applied at once.
This thing unfortunately has to work in IE9, so I can't use history.pushState to get pretty discrete urls for each step.
There are actually several ways you could go about doing this, and which option you choose will largely depend on what's easiest for you AND how you plan to analyze the data.
If you want to use Optimizely's analytics dashboard:
I would recommend creating one experiment which will activate a bunch of other experiments at different times. The activation experiment will be targeted to everyone and run immediately when they get to your wizard. The other experiments will be set up with manual activation and triggered by this experiment.
The activation experiment would have code like:
window.optimizely = window.optimizely || [];

function hashChanged() {
  // Note: location.hash includes the leading '#'
  if (location.hash === '#publishing-step') {
    window.optimizely.push(['activate', 0000000000]);
  }
  if (location.hash === '#checkout-step') {
    window.optimizely.push(['activate', 1111111111]);
  }
}

window.addEventListener('hashchange', hashChanged, false);
Or you could call window.optimizely.push(['activate', xxxxxxxxx]); directly from your site's code instead of creating an activation experiment and listening for hashchange.
If you want to use a 3rd party analytics tool like Google Analytics:
You could do this all in one experiment with code similar to the above, but in each "if" section, instead of activating an experiment, you could run your variation code that makes changes to the wizard and sends special tracking information to your analytics suite for later reporting. You'll have to do your own statistical significance calculation for this method (as Optimizely's data won't be "clean"), but this method usually works out better if properly configured.
Alternatively you could use the method outlined above but still try to use the Optimizely analytics dashboard by creating custom events on your experiment and sending data to them using calls like window.optimizely.push(["trackEvent", "eventName"]);
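A sketch combining the two ideas above, assuming Google Analytics' classic _gaq queue (the step and event names are illustrative; swap in whatever your analytics suite provides):

window.optimizely = window.optimizely || [];
window._gaq = window._gaq || [];

window.addEventListener('hashchange', function () {
  if (location.hash === '#publishing-step') {
    // Variation code for the publishing step goes here, then record
    // the exposure in your analytics suite and/or Optimizely itself.
    window._gaq.push(['_trackEvent', 'wizard-test', 'publishing-step-seen']);
    window.optimizely.push(['trackEvent', 'publishingStepSeen']);
  }
}, false);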
This article may also be helpful to you.
You'll probably need to do this yourself, using Optimizely's JS API to trigger actions on their end and tell it what your users did: https://www.optimizely.com/docs/api
I was googling a lot in order to find a solution to my problems with UI Automation. I found a post that nicely summarizes the issues:
There's no way to run tests from the command line. (...)
There's no way to set up or reset state. (...)
Part of the previous problem is that UI Automation has no concept of discrete tests. (...)
There's no way to programmatically retrieve the results of the test run. (...)
source: https://content.pivotal.io/blog/iphone-ui-automation-tests-a-decent-start
Problem no. 3 can be solved with Jasmine (https://github.com/pivotal/jasmine-iphone).
How about other problems? Have there been any improvements introduced since that post (July 20, 2010)?
And one more problem: is it true that the only existing method for selecting a particular UI element is adding an accessibility label in the application source code?
While UI Automation has improved since that post was made, the improvements that I've seen have all been related to reliability rather than new functionality.
He brings up good points about some of the issues with using UI Automation for more serious testing. If you read the comments later on, there's a significant amount of discussion about ways to address these issues.
The topic of running tests from the command line is discussed in this question, where a potential solution is hinted at in the Apple Developer Forums. I've not tried this myself.
You can export the results of a test after it is run, which you could parse offline.
Finally, in regards to your last question, you can address UI elements without assigning them an accessibility label. Many common UIKit controls are accessible by default, so you can already target them by name. Otherwise, you can pick out views from their location in the display hierarchy, like in the following example:
var tableView = mainWindow.tableViews()[0];
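A slightly fuller sketch of both styles (the control names here are illustrative):

var target = UIATarget.localTarget();
var app = target.frontMostApp();
var mainWindow = app.mainWindow();

// Standard UIKit controls are accessible by name out of the box:
mainWindow.buttons()["Done"].tap();

// Or pick a view out of the display hierarchy by position:
mainWindow.tableViews()[0].cells()[0].tap();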
As always, if there's something missing from the UI Automation tool that is important to you, file an enhancement request so that it might find its way into the next version of the SDK.
Have you tried IMAT? https://code.intuit.com/sf/sfmain/do/viewProject/projects.ginsu. It uses the native JavaScript SDK that Apple provides and can be triggered via the command line or Instruments.
In response to each of your questions:
There's no way to run tests from the command line.(...)
Apple now provides this. With IMAT, you can kick off tests via command line or via Instruments. Before Apple provided the command line interface, we were using AppleScript to bring up Instruments and then kick off the tests - nasty.
There's no way to set up or reset state. (...)
Check out this state diagram: https://code.intuit.com/sf/wiki/do/viewPage/projects.ginsu/wiki/RecoveringFromTestFailures
Part of the previous problem is that UI Automation has no concept of discrete tests. (...)
Agreed. Both IMAT and tuneup.js (https://github.com/alexvollmer/tuneup_js#readme) allow for this.
There's no way to programmatically retrieve the results of the test run. (...)
Reading the trailing plist file is not trivial. IMAT provides a jUnit-like report after a test run by reading the plist file, and this is picked up by my CI tool (TeamCity, Jenkins, CruiseControl).
Check out http://lemonjar.com/blog/?p=69. It talks about how to run UIA from the command line.
Try checking the element hierarchy; the table may be placed inside a UIScrollView.
var tableV = mainWindowTarget.scrollViews()[0].tableViews()[0].scrollToElementWithName("Name of element inside the cell");
The above script will work even if the element is in the 12th cell (but the name should exactly match the text inside the cell).