How to parallelize Gauge at specification level?

I'm building a Gauge automation project with Selenium, Maven, and Java. When executing a specification with a data table like
# Specification
| name |
| A |
| B |
| C |
## Scenario 1
* User logs in application
## Scenario 2
* User does something for product <name>
In a single thread, it runs:
mvn clean install
Output:
Scenario 1
Scenario 2 for name A
Scenario 2 for name B
Scenario 2 for name C
And then it moves to the next specification.
However, Gauge behaves differently when running the same spec in parallel on 2 nodes:
mvn clean install -DinParallel=true -Dnodes=2
Output:
Browser 1: Scenario 1
Browser 2: Scenario 2 for name A
Browser 1: Scenario 2 for name B
Browser 2: Scenario 2 for name C
You can immediately see that the scenarios in Browser 2 will not succeed, as the "precondition" from Scenario 1 never ran there.
Is there a way to parallelize Gauge at specification level?
Note: I know that rewriting the scenarios to be self-contained is one way to go, but those tests get really long really fast, and that increases the running time.

After some experimenting, it turns out that Gauge parallelizes in two different ways, depending on how you write the spec.
With a spec driven by a data table like
# Specification
| name |
| A |
| B |
| C |
## Scenario 1
* User logs in application
## Scenario 2
* User does something for product <name>
the parallelization is done at scenario level, as described in the original question:
mvn clean install -DinParallel=true -Dnodes=2
Output:
Browser 1: Scenario 1
Browser 2: Scenario 2 for name A
Browser 1: Scenario 2 for name B
Browser 2: Scenario 2 for name C
However, when rewriting the specification to incorporate the test data into the steps
# Specification
## Scenario 1
* User logs in application
## Scenario 2 for A
* User does something for product "A"
## Scenario 2 for B
* User does something for product "B"
## Scenario 2 for C
* User does something for product "C"
the output looks something like
mvn clean install -DinParallel=true -Dnodes=2
Output:
Browser 1: Scenario 1
Browser 1: Scenario 2 for A
Browser 1: Scenario 2 for B
Browser 1: Scenario 2 for C
which effectively applies parallelization at spec level rather than scenario level.
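If the real problem is that Scenario 1 is a precondition for everything else in the spec, Gauge's context steps are also worth a look: steps listed under the specification heading, before the first scenario, run before every scenario. A minimal sketch of the original spec rewritten that way, reusing the same steps as above:
# Specification
| name |
| A |
| B |
| C |
* User logs in application
## Scenario 2
* User does something for product <name>
With this layout, whichever browser picks up a scenario also runs the login first, so the scenarios stay self-contained at the cost of logging in once per scenario.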

Related

Can we use the main scenario outline example section data in another feature file, which will be called from the main Feature?

I am able to execute a WebUI feature file against a single browser (Zalenium) using the parallel runner, with the driver defined in karate-config.js. How can we execute a WebUI feature file against multiple browsers (Zalenium) using the parallel runner or distributed testing?
Use a Scenario Outline and the parallel runner. Karate will run each row of an Examples table in parallel. But you will have to move the driver config into the Feature.
Just add a parallel runner to this sample project and try: https://github.com/intuit/karate/tree/master/examples/ui-test
Scenario Outline: <type>
* def webUrlBase = karate.properties['web.url.base']
* configure driver = { type: '#(type)', showDriverLog: true }
* driver webUrlBase + '/page-01'
* match text('#placeholder') == 'Before'
* click('{}Click Me')
* match text('#placeholder') == 'After'
Examples:
| type |
| chrome |
| geckodriver |
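For completeness, the parallel runner mentioned above is plain Java. A minimal sketch, assuming JUnit 5 and features living under a hypothetical classpath:ui folder (Runner/Results is Karate's standard parallel entry point):
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class UiParallelTest {
    @Test
    void testParallel() {
        // run all features under classpath:ui on 2 threads;
        // each Examples row of a Scenario Outline is scheduled as its own unit of work
        Results results = Runner.path("classpath:ui").parallel(2);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}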
There are other ways you can experiment with. Here is another pattern: keep a normal Scenario in main.feature, which you can then call from a Scenario Outline in a separate "special" feature that is used only when you want to do this kind of parallelization of UI tests.
Scenario Outline: <config>
* configure driver = config
* call read('main.feature')
Examples:
| config! |
| { type: 'chromedriver' } |
| { type: 'geckodriver' } |
| { type: 'safaridriver' } |
EDIT - also see this answer: https://stackoverflow.com/a/62325328/143475
And for other ideas: https://stackoverflow.com/a/61685169/143475
EDIT - it is possible to re-use the same browser instance for all tests and the Karate CI regression test does this, which is worth studying for ideas: https://stackoverflow.com/a/66762430/143475

Data Driven testing with cucumber protractor

Let's say I have a scenario in my demo.feature file
Scenario Outline: Gather and load all submenus
Given I will login using <username> and <password>
When I will click all links
Examples:
| username | password |
| user1 | pass1 |
| user2 | pass2 |
Let's say I have a file called users.json.
How can I get those usernames and passwords from that external file into my demo.feature?
Can I pick the file up by passing parameters to my npm script, like below?
npm run cucumber -- --params.environment.file=usernames.json
I recommend having the login step read that JSON file within the step definition (see the sketch after this list). Just make sure not to check the file into the repository; instead, always expect it to be present at a known local path.
Doing the above is useful for a couple of reasons:
- An engineer running your tests does not need to know that a param must be passed in from the command line
- The code is self-descriptive in that step as to how it logs in
- You can add better error handling
- You can use multiple user files if need be, by having hooks define paths etc. based on tags
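For illustration, a hypothetical users.json mapping each username from the Examples table to its password (the file name and shape are my assumption, not something the question prescribes):
{
  "user1": "pass1",
  "user2": "pass2"
}
The login step definition can then read this file from the known local path and look up the password for the username the scenario passes in, so no credentials appear in the feature file or on the command line.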

Start OrientDB without user input

I'm attempting to start OrientDB in distributed mode on AWS.
I have an auto scaling group that creates new nodes as needed. When the nodes are created, they start with a default config without a node name. The idea is that the node name is generated randomly.
My problem is that the server starts up and asks for user input.
+---------------------------------------------------------------+
| WARNING: FIRST DISTRIBUTED RUN CONFIGURATION |
+---------------------------------------------------------------+
| This is the first time that the server is running as |
| distributed. Please type the name you want to assign to the |
| current server node. |
| |
| To avoid this message set the environment variable or JVM |
| setting ORIENTDB_NODE_NAME to the server node name to use. |
+---------------------------------------------------------------+
Node name [BLANK=auto generate it]:
I don't want to pre-set the node name because I need a random one, and the server never starts because it's waiting for user input.
Is there a parameter I can pass to dserver.sh that will skip this prompt and generate a random node name?
You could create a random string and pass it to OrientDB as the node name via the ORIENTDB_NODE_NAME environment variable. Example:
ORIENTDB_NODE_NAME=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)
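For example, in the instance's startup (user-data) script, assuming a hypothetical /opt/orientdb install path:
# generate a random 32-character alphanumeric node name, then start the server
export ORIENTDB_NODE_NAME=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)
/opt/orientdb/bin/dserver.sh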
For more information about this, look at: https://gist.github.com/earthgecko/3089509

How to debug "Sugar CRM X Files May Only Be Used With A Sugar CRM Y Database."

Sometimes one gets a message like:
Sugar CRM 6.4.5 Files May Only Be Used With A Sugar CRM 6.4.5 Database.
I am wondering how Sugar determines what version of the database it is using. In the above case, I get the following output:
select * from config where name='sugar_version';
+----------+---------------+-------+
| category | name | value |
+----------+---------------+-------+
| info | sugar_version | 6.4.5 |
+----------+---------------+-------+
1 row in set (0.00 sec)
cat config.php | grep sugar_version
'sugar_version' => '6.4.5',
Given the above output, I am wondering how to debug the output "Sugar CRM 6.4.5 Files May Only Be Used With A Sugar CRM 6.4.5 Database.": Sugar seems to think the files are not of version 6.4.5 even though the sugar_version is 6.4.5 in config.php; where should I look next?
Two options for the issue:
Option 1: Upgrade your database so its stored version matches the files.
Option 2: Follow the steps below and change the SugarCRM config version.
mysql> select * from config where name ='sugar_version';
+----------+---------------+---------+----------+
| category | name | value | platform |
+----------+---------------+---------+----------+
| info | sugar_version | 7.7.0.0 | NULL |
+----------+---------------+---------+----------+
1 row in set (0.00 sec)
Update the stored sugar_version to the appropriate value:
mysql> update config set value='7.7.1.1' where name ='sugar_version';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
The above commands seem to be correct. Sugar seems to check that config.php and the config table in the database contain the same version. In my case I was making the mistake of using the wrong database, so if you're like me and tend to get your databases mixed up, double-check in config.php that 'dbconfig' is indeed pointing to the right database.
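For reference, the block to check in config.php looks something like this (the values here are illustrative):
'dbconfig' => array (
  'db_host_name' => 'localhost',
  'db_user_name' => 'sugaruser',
  'db_password' => '********',
  'db_name' => 'sugarcrm', // verify this is the same database you queried above
  'db_type' => 'mysql',
),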

Using FitNesse to test RESTful APIs using RestFixture & anonymous namespaces

I'm considering using FitNesse to write some acceptance tests for some extensions to a RESTful API. The GET response includes XML in an anonymous namespace, e.g.
<?xml version="1.0" encoding="utf-8"?>
<things xmlns="http://example.com/ns/">
<thing id="1"/>
<thing id="2"/>
</things>
The FitNesse fixture RestFixture seems a good candidate for this. It should allow me to run an XPath to verify the response, but it does not appear to play nicely with anonymous namespaces. The following test will fail because the XPath needs the namespace specified:
|!-smartrics.rest.fitnesse.fixture.RestFixture-!|http://example.com/v1.0/inbox |
|GET | /things | 200 | | //thing |
I can find no way of expressing the XPath such that RestFixture will parse it successfully.
A couple of notes:
(a) You can query attributes because they're not in a namespace. The following passes:
|GET | /things | 200 | | //#id |
(b) An example elsewhere suggested using string matching. This is wrong - the following passes too!
|GET | /things | 200 | | 'complete and utter nonsense' |
RestFixture now supports namespaces.
You need to define the namespace context as a key/value map of alias to namespace URI using the RestFixtureConfig (this must include an alias for the default namespace too).
Then you can use the aliases defined there in the XPaths that match the response body of a request, or in the let() command, to extract data from the response.
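For instance, for the XML in the question, something along these lines should work (the property name restfixture.xml.namespace.context is taken from the RestFixture documentation; double-check it against your version):
|!-smartrics.rest.fitnesse.fixture.RestFixtureConfig-!|
|restfixture.xml.namespace.context|ns=http://example.com/ns/|

|!-smartrics.rest.fitnesse.fixture.RestFixture-!|http://example.com/v1.0/inbox |
|GET | /things | 200 | | //ns:thing |
With the alias in place, //ns:thing matches the elements that the unprefixed //thing could not.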
An example is included in the live documentation of the rest-fixture:
https://github.com/smartrics/RestFixture/downloads (check the downloadable HTML file RestFixture-<ver>.html).