Can we use the main Scenario Outline Examples section data in another feature file, which will be called from the main Feature?

I am able to execute a WebUI feature file against a single browser (Zalenium) using the parallel runner and a driver defined in karate-config.js. How can we execute a WebUI feature file against multiple browsers (Zalenium) using the parallel runner or distributed testing?

Use a Scenario Outline and the parallel runner. Karate will run each row of an Examples table in parallel, but you will have to move the driver config from karate-config.js into the feature itself.
Just add a parallel runner to this sample project and try: https://github.com/intuit/karate/tree/master/examples/ui-test
Scenario Outline: <type>
* def webUrlBase = karate.properties['web.url.base']
* configure driver = { type: '#(type)', showDriverLog: true }
* driver webUrlBase + '/page-01'
* match text('#placeholder') == 'Before'
* click('{}Click Me')
* match text('#placeholder') == 'After'
Examples:
| type |
| chrome |
| geckodriver |
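As an alternative to a JUnit runner class, the standalone karate.jar can also run features in parallel from the command line; a minimal sketch, assuming the standalone distribution and features under src/test/features (-T sets the thread count):
java -jar karate.jar -T 2 src/test/features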
There are other ways you can experiment with. Here is another pattern: keep a normal Scenario in main.feature, and call it from a Scenario Outline in a separate "special" feature that is used only when you want this kind of parallelization of UI tests (a sketch of main.feature follows the example below).
Scenario Outline: <config>
* configure driver = config
* call read('main.feature')
Examples:
| config! |
| { type: 'chromedriver' } |
| { type: 'geckodriver' } |
| { type: 'safaridriver' } |
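For reference, main.feature would then be a plain feature with no driver configuration of its own, since the outline supplies it; a minimal sketch, reusing the steps from the first example:
Feature: main
Scenario: click and verify
* def webUrlBase = karate.properties['web.url.base']
* driver webUrlBase + '/page-01'
* match text('#placeholder') == 'Before'
* click('{}Click Me')
* match text('#placeholder') == 'After'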
EDIT - also see this answer: https://stackoverflow.com/a/62325328/143475
And for other ideas: https://stackoverflow.com/a/61685169/143475
EDIT - it is possible to re-use the same browser instance for all tests; the Karate CI regression test does this and is worth studying for ideas: https://stackoverflow.com/a/66762430/143475

VS Code Extension Settings CLI

I want to create an automated script for setting up VS Code.
Part of this is the installation of the extensions and configuring them as necessary.
So I was able to install the extensions via the CLI, but I can't find a way to change the extension settings using only the command line.
For example - I want to change Jest Runner settings. I found this on their readme:
Jest Runner will work out of the box, with a valid Jest config.
If you have a custom setup use the following options to configure Jest Runner:
| Command | Description |
| --- | --- |
| jestrunner.configPath | Jest config path (relative to ${workFolder} e.g. jest-config.json) |
| jestrunner.jestPath | Absolute path to jest bin file (e.g. /usr/lib/node_modules/jest/bin/jest.js) |
| jestrunner.debugOptions | Add or overwrite vscode debug configurations (only in debug mode) (e.g. `"jestrunner.debugOptions": { "args": ["--no-cache"] }`) |
| jestrunner.runOptions | Add CLI Options to the Jest Command (e.g. `"jestrunner.runOptions": ["--coverage", "--colors"]`) https://jestjs.io/docs/en/cli |
| jestrunner.jestCommand | Define an alternative Jest command (e.g. for Create React App and similar abstractions) |
| jestrunner.disableCodeLens | Disable CodeLens feature |
| jestrunner.codeLensSelector | CodeLens will be shown on files matching this pattern (default **/*.{test,spec}.{js,jsx,ts,tsx}) |
But I don't know how to access these settings from the command line.
Any thoughts on how to do this?
Thanks!
I was able to find a solution.
So it turns out that the settings are actually stored in:
<userFolder>\AppData\Roaming\Code\User\Settings.json
From there I can open the JSON file and add the settings specified in the extension's readme.
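For example, to point Jest Runner at a custom Jest config and add CLI options, the entries added to that file might look like this (the values are illustrative; the keys come from the readme table above):
{
  "jestrunner.configPath": "jest-config.json",
  "jestrunner.runOptions": ["--coverage", "--colors"]
}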

How to parallelize Gauge at specification level?

I'm building a Gauge automation project with Selenium, Maven and Java. When executing a specification with an included data table like
# Specification
| name |
| A |
| B |
| C |
## Scenario 1
* User logs in application
## Scenario 2
* User does something for product <name>
In a single thread, it runs:
mvn clean install
Output:
Scenario 1
Scenario 2 for name A
Scenario 2 for name B
Scenario 2 for name C
And then it moves to the next specification.
However, Gauge behaves differently when running the same spec in parallel on 2 nodes:
mvn clean install -DinParallel=true -Dnodes=2
Output:
Browser 1: Scenario 1
Browser 2: Scenario 2 for name A
Browser 1: Scenario 2 for name B
Browser 2: Scenario 2 for name C
You can immediately see that the scenarios from Browser 2 will not succeed as the "precondition" from Scenario 1 was not run.
Is there a way to parallelize Gauge at specification level?
Note: I know that rewriting the scenarios to be self-contained is one way to go, but these tests get really long, really fast and increase the running time.
After some experimenting, it turns out that Gauge parallelizes at two different levels, depending on how you write the spec.
With a spec with test data like
# Specification
| name |
| A |
| B |
| C |
## Scenario 1
* User logs in application
## Scenario 2
* User does something for product <name>
the parallelization is done at scenario level, as described in the original question:
mvn clean install -DinParallel=true -Dnodes=2
Output:
Browser 1: Scenario 1
Browser 2: Scenario 2 for name A
Browser 1: Scenario 2 for name B
Browser 2: Scenario 2 for name C
However, when rewriting the specification to incorporate the test data into the steps
# Specification
## Scenario 1
* User logs in application
## Scenario 2 for A
* User does something for product "A"
## Scenario 2 for B
* User does something for product "B"
## Scenario 2 for C
* User does something for product "C"
the output looks something like
mvn clean install -DinParallel=true -Dnodes=2
Output:
Browser 1: Scenario 1
Browser 1: Scenario 2 for name A
Browser 1: Scenario 2 for name B
Browser 1: Scenario 2 for name C
which effectively applies parallelization at spec level rather than scenario level.

Data-driven testing with Cucumber and Protractor

Let's say I have a scenario in my demo.feature file:
Scenario Outline: Gather and load all submenus
Given I will login using <username> and <password>
When I will click all links
Examples:
| username | password |
| user1 | pass1 |
| use2 | pass2 |
Let's say I have a file called users.json.
How can I get those usernames and passwords from that external file into my demo.feature?
Can I pick up the file by passing parameters to my npm script, like below?
npm run cucumber -- --params.environment.file=usernames.json
I recommend having the login step read that JSON file within the step definition (see the sketch after the list below). Just make sure not to check the file into the repository; instead, always expect it to exist at a known location locally.
Doing the above is useful for a couple of reasons:
- An engineer running your tests does not need to know that a param must be passed in from the command line
- The code is self-descriptive in that step as to how it logs in
- You can add better error handling
- You can use multiple user files if need be, by having hooks define paths etc. based on tags
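A minimal sketch of what that step definition could look like in TypeScript (the file locations, the users.json shape, and the fallback behavior are assumptions for illustration):
// login.steps.ts - reads credentials from a local users.json kept out of the repo
import { Given } from 'cucumber';
import * as fs from 'fs';
import * as path from 'path';

// assumed shape of users.json: { "user1": "pass1", "user2": "pass2" }
const usersFile = path.resolve(process.cwd(), 'users.json');
const users: Record<string, string> = JSON.parse(fs.readFileSync(usersFile, 'utf8'));

Given(/^I will login using (\S+) and (\S+)$/, async (username: string, outlinePassword: string) => {
  // prefer the locally stored password; fall back to the value from the Examples table
  const password = users[username] || outlinePassword;
  // ... drive the login page with protractor here, e.g. fill in username/password and submit
});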

Talend Administration Center linking Job to project

I'm trying to create a Project and a Task in TAC using the MetaServletCaller.bat file.
I'm able to create a project using the bat file, but couldn't figure out how to link or assign jobs to that project.
How can I create a project with its jobs using the MetaServletCaller.bat file?
The Talend MetaServletCaller API doesn't provide any command for creating a job from an export file. The only way to do this is either in Talend Studio, or programmatically via the commandline importItems command, which allows you to import an exported job (while logged in to the project):
| Command | Description |
| --- | --- |
| importItems source (dir|.zip) | imports items |
| -if (--item-filter) filterExpr | item filter expression |
| -im (--implicit) | import implicit |
| -o (--overwrite) | overwrite existing items |
| -s (--status) | import the status |
| -sl (--statslogs) | import stats & logs params |
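For example, once logged in to the project from the commandline shell, an import based on the options above might look like this (the export path is illustrative, and the exact argument order should be checked against the reference):
importItems C:\exports\myJob.zip -o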
You can find the commandline API reference here.

Using FitNesse to test RESTful APIs using RestFixture & anonymous namespaces

I'm considering using FitNesse to write some acceptance tests for some extensions to a RESTful API. The GET response includes XML in an anonymous namespace, e.g.
<?xml version="1.0" encoding="utf-8"?>
<things xmlns="http://example.com/ns/">
<thing id="1"/>
<thing id="2"/>
</things>
The FitNesse fixture RestFixture seems a good candidate for this. It should allow me to run an XPath to verify the response, but this does not appear to play nicely with anonymous namespaces. The following test will fail because needs the namespace specifying:
|!-smartrics.rest.fitnesse.fixture.RestFixture-!|http://example.com/v1.0/inbox |
|GET | /things | 200 | | //thing |
I can find no way of expressing the XPath such that RestFixture will parse it successfully.
A couple of notes:
(a) You can query attributes because they're not in a namespace. The following passes:
|GET | /things | 200 | | //#id |
(b) An example elsewhere suggested using string matching. This is wrong - the following passes too!
|GET | /things | 200 | | 'complete and utter nonsense' |
RestFixture now supports namespaces.
You need to define the namespace context as a key/value map of alias to namespace URI using the RestFixtureConfig (this must include an alias for the default namespace too).
Then you can use the aliases defined there in the XPaths that match the response body of a request, or in the let() command, to extract data from the response.
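A sketch of what this could look like, assuming the alias ns is mapped to the default namespace (the config property name shown is illustrative; verify it against the live documentation below):
|!-smartrics.rest.fitnesse.fixture.RestFixtureConfig-!|
|restfixture.xml.namespace.context|ns=http://example.com/ns/|
|!-smartrics.rest.fitnesse.fixture.RestFixture-!|http://example.com/v1.0/inbox |
|GET | /things | 200 | | //ns:thing |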
An example is included in the live documentation of the rest-fixture:
https://github.com/smartrics/RestFixture/downloads (check the downloadable HTML file RestFixture-<ver>.html).