Running Protractor cucumber in parallel with consolidated report - protractor

This may sound like a duplicate, but it is not.
I know that I can use the configuration below in the config file to start multiple instances of the chromedriver, which run the features (sharing the same step definitions) in parallel.
capabilities: {
'browserName': 'chrome',
'shardTestFiles': true,
'maxInstances': 0
},
Q1. My question is why the chromedriver doesn't exit when a scenario fails. (That happens only when I use a maxInstances value > 0.)
The chromedriver exits with exit code 3 and exit code 1.
Q2. Has anyone been able to sort out the reporting issue? How can I generate the report when all the features have finished?
Any help will be appreciated.
Thanks

In order to generate the consolidated HTML report after the parallel run, I have used the afterLaunch parameter in the protractor.conf.js file together with https://github.com/gkushang/cucumber-html-reporter. Below is the code:
afterLaunch: function afterLaunch () {
    var path = require('path');
    var cucumberHtmlReporter = require('cucumber-html-reporter');
    var jsonReportFolder = '/path/to/all/json/reports';
    var cucumberHtmlReport = path.join(jsonReportFolder, 'cucumber.html');
    var options = {
        theme: 'bootstrap',
        jsonDir: jsonReportFolder,
        output: cucumberHtmlReport,
        reportSuiteAsScenarios: true,
        launchReport: true
    };
    cucumberHtmlReporter.generate(options);
}
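For afterLaunch to have anything to merge, each sharded instance has to write its Cucumber JSON results to its own file, otherwise the parallel instances overwrite each other's output. Below is a minimal sketch of that part of the config; it assumes the protractor-cucumber-framework cucumberOpts and uses the process id to keep the file name unique per instance (the report folder path is just a placeholder):
var path = require('path');
var jsonReportFolder = '/path/to/all/json/reports'; // same folder the afterLaunch hook reads

exports.config = {
    framework: 'custom',
    frameworkPath: require.resolve('protractor-cucumber-framework'),
    cucumberOpts: {
        require: ['steps/**/*.js'],
        // cucumber-js "json:<file>" formatter; each sharded instance is a separate
        // node process, so process.pid gives every instance its own JSON file
        format: 'json:' + path.join(jsonReportFolder, 'results.' + process.pid + '.json')
    }
    // ... capabilities, afterLaunch, etc. as above
};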

The existing behavior is correct. Do not use 'maxInstances': 0.
The default value is 1, and any value > 1 is the right way to do it. The error that you are seeing comes from Protractor's source code, the taskScheduler.
Sharded test files are handled in this taskScheduler, and the maxInstances logic is as follows:
this.maxInstance = capabilities.maxInstances || 1;

/**
 * Get maximum number of concurrent tasks required/permitted.
 *
 * @return {number}
 */
count += Math.min(queue.maxInstance, queue.specLists.length);
So if you have maxInstances set to 0, it will cause problems and your run will never exit cleanly. I also don't think your code will run in parallel.
What I would suggest is:
Check your Protractor version and update to the latest.
Change your config file to 'maxInstances': 3 (anything greater than 1; 1 is the default), as sketched below.
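For example, a sketch of the capabilities block with sharding kept on and a bounded instance count (3 is just an example value; pick whatever suits your machine):
capabilities: {
    'browserName': 'chrome',
    'shardTestFiles': true,
    'maxInstances': 3 // run at most 3 browser instances in parallel; the default is 1
},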

Related

Purpose of minSupported and maxSupported parameters in getVersion API

I find the getVersion API a bit hard to grasp. After some manual experiments with workflow changes, I found out that it's perfectly fine to have a piece of code like this:
val version = Workflow.getVersion("change#1", 1, 1);
val anotherVersion = Workflow.getVersion("change#2", 2, 2);
Does it mean that the integer version is assigned to a changeId and not to a workflow instance? Does a single workflow instance/execution keep a set of integer-based versions?
What is the purpose of the minSupported and maxSupported parameters? Why not simply use an API like the one below?
val version = Workflow.getVersion("change#1")
if (version) {
// code after "change#1" changes
} else {
// code before "change#1" changes
}
You are correct: the version is assigned to a changeId, not to a workflow instance. This allows versioning each piece of the workflow code independently. It also allows fixing bugs while a workflow is already running and hasn't yet reached that part of the code.
The main reason is validation. The getVersion call records maxVersion in the workflow history when the code is executed for the first time, so on replay the recorded version is used, which guarantees a correct replay even if maxVersion has changed since. When a branch is removed, minVersion is incremented. Imagine that such code is deployed by mistake while there is a workflow that still needs the removed branch. getVersion is going to detect that minVersion is larger than the version recorded in the history and is going to fail the decision task, essentially blocking the workflow execution instead of breaking it. The same happens if the recorded version is higher than the maxVersion argument.
Update: Answer to the comment
In other words, I'm trying to come up with a situation where using
many different changeIds and not exceeding maxVersion=1 is not enough
They are enough if you don't remove branches. But if you do, then having validation of the minimal version is very convenient. For example, look at the following code:
val version = Workflow.getVersion("change", 0, 2);
if (version == DEFAULT_VERSION) {
// before change
} else if (version == 1) {
// first change
} else {
// second change
}
Let's remove the default version:
val version = Workflow.getVersion("change", 1, 2);
if (version == 1) {
// first change
} else {
// second change
}
Now look at the versioning without min and max:
var version1 = Workflow.getVersion("change1");
var version2 = Workflow.getVersion("change2");
if (version1 == DEFAULT_VERSION) {
// before change
} else if (version2 == DEFAULT_VERSION) {
// first change
} else {
// second change
}
Let's remove the default branch:
var version2 = Workflow.getVersion("change2");
if (version2 == DEFAULT_VERSION) {
// first change
} else {
// second change
}
Note that a workflow that used the last code sample is going to break in an unpredictable way if it is routed by mistake to a worker that doesn't know about version2 but only about the original default version. The first example, with min and max versions, is going to detect the issue gracefully.

How to run e files one by one, not in parallel?

I am new to Specman. I am writing a testbench to which I want to give many specific test cases in order to debug a calculator.
For example,
I have two files, the first one called "test1" and the second called "test2".
Here is my code for "test1":
extend instruction_s {
keep cmd_in_1 == ADD;
keep din1_1 < 10;
keep din2_1 < 10;
};
extend driver_u {
keep instructions_to_drive.size() == 10;
};
And here is my code for "test2":
extend instruction_s {
keep cmd_in_1 == SUB;
keep din1_1 < 10;
keep din2_1 < 10;
};
extend driver_u {
keep instructions_to_drive.size() == 10;
};
However, when I tried to test my code, Specman showed an error; it seems I can't do it like that.
Is there any way I can let Specman execute the "test1" file first and then run the "test2" file?
Or is there some other way I can achieve my goal?
Thanks for your help.
Do you really want to have one test that executes 10 ADD instructions, and another test that executes 10 SUB instructions?
If so, the common way to do this is to compile your testbench once and run it multiple times, each time loading a different test file.
For a start, try this:
xrun my_device.v my_testbench.e test1.e
xrun my_device.v my_testbench.e test2.e

How to read gulp task names from command params?

I have the following code, which reads which task name I passed to gulp (release or test) and decides which task group to load from the files based on that.
var argv = require('yargs').argv;
var group = argv._[0];
var groups = {
"release": ["tasks/release/*.js", , "tasks/release/deps.json"],
"test": ["tasks/test/*.js", "tasks/test/deps.json"]
};
require("gulp-task-file-loader").apply(null, groups[group]);
Isn't there a better way to get the requested tasks from gulp itself instead of using yargs?
I found a great tutorial about CLI tools. According to it, I should use commander, so I did; it is much better than yargs. Another possible solution is to use process.argv[2] in this case, but it is better to use a proper parser in the long term.
var program = require("commander");
program.parse(process.argv);
var group = program.args[0];
var groups = {
"release": ["tasks/release/*.js", , "tasks/release/deps.json"],
"test": ["tasks/test/*.js", "tasks/test/deps.json"]
};
require("gulp-task-file-loader").apply(null, groups[group]);

Why doesn't karma-cli accept files as command line argument?

I'm using the config from my project but would like to run karma just once for one specific test script. I don't want to have to create a whole new config file just for this case and would prefer just passing in the script I want to run (so basically telling karma to use files: ['myTest.js']).
But there don't seem to be any options for that AFAICT in the docs. Why would this be missing? It seems like a fundamental feature IMO.
In karma.conf.js, add something like this:
function mergeFilesWithArgv(staticFiles) {
var source = staticFiles, argv = process.argv;
argv.forEach(function (arg) {
var index = arg.indexOf('--check=');
if (index !== -1) {
source.push(arg.substring(8));
}
});
return source;
}
config.set({
...
files: mergeFilesWithArgv([
'js_src/tests/*.test.js'
]),
...
});
Use: karma start --check='./path/to/file.js'
Or, for multiple files: karma start --check='./path/to/file.js' --check='/another/path/to/another/file.js'

How to ensure only one job fires at a time in Quartz.NET?

I have a Windows Service that uses Quartz.NET to execute jobs that are scheduled. I only want it to pick up a single job at a time. However, occasionally I am seeing behavior that indicates that it has picked up two jobs at once.
There are two log files (the regular one and one automatically generated when the regular one is in use) with jobs that start at the exact same time. I can see both jobs executing in the QRTZ_FIRED_TRIGGERS table, but only one has the correct instance ID, which is odd.
I have configured Quartz to use only a single thread. Is this not how you tell it to only pick up a single job at a time?
Here is my quartz.config file with sensitive values hashed out:
quartz.scheduler.instanceName = DefaultQuartzJobScheduler
quartz.scheduler.instanceId = ######################
quartz.jobstore.clustered = true
quartz.jobstore.clusterCheckinInterval = 15000
quartz.threadPool.type = Quartz.Simpl.SimpleThreadPool, Quartz
quartz.jobStore.useProperties = false
quartz.jobStore.type = Quartz.Impl.AdoJobStore.JobStoreTX, Quartz
quartz.jobStore.driverDelegateType = Quartz.Impl.AdoJobStore.OracleDelegate, Quartz
quartz.jobStore.tablePrefix = QRTZ_
quartz.jobStore.lockHandler.type = Quartz.Impl.AdoJobStore.UpdateLockRowSemaphore, Quartz
quartz.jobStore.misfireThreshold = 60000
quartz.jobStore.dataSource = default
quartz.dataSource.default.connectionString = ######################
quartz.dataSource.default.provider = OracleClient-20
# Customizable values per Node
quartz.threadPool.threadCount = 1
quartz.threadPool.threadPriority = Normal
Make the threadCount = 1.
<add key="quartz.threadPool.threadCount" value="1"/>
<add key="quartz.threadPool.threadPriority" value="Normal"/>
(as you have done)
Make each of your jobs "Stateful"
[PersistJobDataAfterExecution]
[DisallowConcurrentExecution]
public class StatefulDoesNotRunConcurrentlyJob : IJob /* : IStatefulJob */
/* Error 43: 'Quartz.IStatefulJob' is obsolete: 'Use DisallowConcurrentExecutionAttribute and/or PersistJobDataAfterExecutionAttribute annotations instead.' */
{
    public void Execute(IJobExecutionContext context)
    {
        // job logic goes here
    }
}
I've left in the name of the older way of doing this (namely IStatefulJob) and the error message that is generated when you code against the outdated IStatefulJob interface, because the error message gives the hint.
Basically, if you have 1 thread AND every job is marked with DisallowConcurrentExecution, it should result in only 1 job running at any given time, i.e. in "serial mode".