Why is the flaky plugin not triggered on failed tests decorated with @pytest.mark.flaky(max_runs=...)? - pytest

I have a pytest suite running in this env:
Test session starts (platform: linux, Python 3.6.1, pytest 3.3.1, pytest-sugar 0.9.1)
plugins: flaky-3.5.3, dependency-0.3.2, forked-0.2, logger-0.4.0, sugar-0.9.1, xdist-1.24.1
I have a parametrized test, decorated with flaky, that is supposed to be re-run up to three times if it fails.
@pytest.mark.flaky(max_runs=3)  # re-run this test in case it fails
def test_cucubau(getBauBau_fixture):
    assert cucubau(getBauBau_fixture) == True
However, the test fails only once, it is not re-run, and my flaky test report is empty:
===Flaky Test Report===
===End Flaky Test Report===
Based on what I have read about the flaky plugin, the usage should be trivial, but I'm not able to see what is wrong with my code.
Any idea?

I believe you need the pytest-rerunfailures plugin for that to work. Then you should be able to decorate your test with @pytest.mark.flaky(reruns=3).
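For illustration, a minimal sketch of what the question's test would look like under that suggestion, assuming pytest-rerunfailures is installed (the reruns_delay argument is optional):
import pytest

@pytest.mark.flaky(reruns=3, reruns_delay=1)  # re-run up to 3 times, pausing 1 s between attempts
def test_cucubau(getBauBau_fixture):
    assert cucubau(getBauBau_fixture) == True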

pytest-xdist indirect fixtures with class scope

I have some complicated and heavy logic to build a test object, and the tests are very long running. They are integration tests and I wanted to try to parallelize them a bit, so I found the pytest-xdist library.
Because of the heavy nature of building the test object, I am using pytest's indirect-parametrization capability on fixtures to build the objects at test time rather than at collection time. Some code I am using for testing can be found below.
# run.py
import pytest

@pytest.mark.parametrize("attribute", (
    ["pid1", ["pod1", "pod2", "pod3"]],
    ["pid2", ["pod2", "pod4", "pod5"]],
), indirect=True)
class TestSampleWithScenarios(object):
    @pytest.fixture(scope="class")
    def attribute(self, request):
        # check out the pod here,
        # build the device object and yield it
        device = {}
        yield device
        # tear down the device object
        # release the pod

    def test_demo1(self, attribute):
        assert isinstance(attribute, str)

    def test_demo2(self, attribute):
        assert isinstance(attribute, str)
My run command is currently pytest run.py -n 4 --dist=loadscope
When I do not use loadscope, each test is sent to its own worker. I do not want this, because I would like to build the device object only once and use it for all related tests.
When I use loadscope, all the tests are executed against gw0 and I am not getting any parallelism.
I am wondering whether there are any tweaks I am missing, or whether this functionality is simply not implemented currently.
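For context, --dist=loadscope groups test functions by module and test methods by class, and each whole group is assigned to a single worker; since every test above lives in the one class TestSampleWithScenarios, they all end up on gw0. A minimal sketch, with a hypothetical class layout that keeps the question's fixture shape, of splitting the scenarios into separate classes so loadscope can place them on different workers while still building each device only once per class:
# run.py (sketch): one class per scenario, so --dist=loadscope can distribute them
import pytest

class DeviceScenarioBase(object):
    @pytest.fixture(scope="class")
    def attribute(self, request):
        # build the heavy device object once per class ...
        device = {"pid": request.param[0], "pods": request.param[1]}
        yield device
        # ... then tear it down / release the pod here

    def test_demo1(self, attribute):
        assert isinstance(attribute, dict)

    def test_demo2(self, attribute):
        assert isinstance(attribute, dict)

@pytest.mark.parametrize("attribute", [("pid1", ["pod1", "pod2", "pod3"])], indirect=True)
class TestScenarioPid1(DeviceScenarioBase):
    pass

@pytest.mark.parametrize("attribute", [("pid2", ["pod2", "pod4", "pod5"])], indirect=True)
class TestScenarioPid2(DeviceScenarioBase):
    pass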

Protractor tests failing after upgrading to Angular 5

We have recently upgraded to Angular 5. Since then, my Protractor tests have started failing with the reason "Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.".
All these tests were working fine before.
Protractor version : 5.2.0
karma version: 1.7.0
Highly appreciate your suggestions.
Thanks
This is a Jasmine timeout; see the Protractor guidance "Timeouts from Jasmine":
Spec Timeout
If a spec (an 'it' block) takes longer than the Jasmine timeout for any reason, it will fail.
Looks like: a failure in your test results - timeout: timed out after 30000 msec waiting for spec to complete
Default timeout: 30 seconds
How to change: To change for all specs, add jasmineNodeOpts: {defaultTimeoutInterval: timeout_in_millis} to your Protractor configuration file. To change for one individual spec, pass a third parameter to it: it(description, testFn, timeout_in_millis).
Try to debug your test (see the Protractor debugging instructions). Following any change, including an upgrade, it's possible your test is broken, resulting in it hanging beyond the duration of the default Jasmine timeout.
A lazy option would be to increase your Jasmine timeout excessively, to see if your test fails with a different exception.
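For reference, a minimal sketch of the configuration change described above; the file name, spec pattern, and the 60-second value are illustrative, not taken from the question:
// protractor.conf.js (sketch)
exports.config = {
  framework: 'jasmine',
  specs: ['./e2e/**/*.spec.js'],
  jasmineNodeOpts: {
    // raise the per-spec timeout from the 30 s default
    defaultTimeoutInterval: 60000
  }
};
// or, for a single spec: it('loads the page', testFn, 60000);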

How to debug JavaScript tests in JHipster applications using Karma?

I have a simple monolithic application generated using JHipster v4.10.1, with a front-end using Angular 4.x. To run the JavaScript unit tests, as suggested in the documentation, I ran
./node_modules/karma/bin/karma start src/test/javascript/karma.conf.js --debug
The command runs the tests, reports the coverage summary, and exits; whether all tests pass or some fail does not matter. The test run output does show at one point that the debug server is loaded:
21 11 2017 13:41:20.616:INFO [karma]: Karma v1.7.1 server started at http://0.0.0.0:9876/
But because the command exits, the Karma debug server cannot be accessed. How do I run the tests so that the Karma console can be used in the browser for debugging?
I figured out that the magic flag is actually single-run, which seems to be true by default. So the main command to run for JS debugging is:
yarn test --single-run=false
which in turn runs
$ karma start src/test/javascript/karma.conf.js --single-run=false
With this, the command will only exit on an explicit kill, e.g. with Ctrl+C or Ctrl+Z. The Karma debug console can then be accessed at http://localhost:9876/debug.html (assuming the default port is not already busy; if it is, the test output should tell you which port was chosen).
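If you would rather not pass the flag on every run, the same switch can be flipped in the Karma config itself; a sketch, assuming the generated src/test/javascript/karma.conf.js exposes the usual singleRun option:
// src/test/javascript/karma.conf.js (sketch)
module.exports = function (config) {
    config.set({
        // ...keep the generated JHipster settings as they are...
        singleRun: false // keep the Karma server running so /debug.html stays reachable
    });
};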
Additionally, you need to disable minimization (and also remove the Istanbul config; I'm not sure why) so that you can set breakpoints and step through the .ts code in the debugger easily. I figured out this is done by making the following changes in the webpack/webpack.test.js file:
Remove the following Istanbul config from the module.rules array:
{
    test: /src[/|\\]main[/|\\]webapp[/|\\].+\.ts$/,
    enforce: 'post',
    exclude: /(test|node_modules)/,
    loader: 'sourcemap-istanbul-instrumenter-loader?force-sourcemap=true'
}
Add minimize: false to the LoaderOptionsPlugin under the plugins array:
new LoaderOptionsPlugin({
    minimize: false,
    options: {
        tslint: {
            emitErrors: !WATCH,
            failOnHint: false
        }
    }
})

Buildbot slave priority

Problem
I have set up a latent slave in Buildbot to help avoid congestion.
I've set up my builds to run on either the permanent slave or the latent one. The idea is that the latent slave is woken up only when needed, but the result is that Buildbot randomly selects one slave or the other, so sometimes I have to wait for the latent slave to wake up even if the permanent one is idle.
Is there a way to prioritize Buildbot slaves?
Attempted solutions
1. Custom nextSlave
Following @david-dean's suggestion, I've created a nextSlave function as follows (updated to a working version):
from twisted.python import log
import random
import traceback

def slave_selector(builder, builders):
    try:
        host = None
        support = None
        for slave_builder in builders:
            if slave_builder.slave.slavename == 'host-slave':
                host = slave_builder
            elif slave_builder.slave.slavename == 'support-slave':
                support = slave_builder
        if host and support and len(support.slave.slave_status.runningBuilds) < len(host.slave.slave_status.runningBuilds):
            log.msg('host-slave has many running builds, launching build in support-slave')
            return support
        if not support:
            log.msg('no support slave found, launching build in host-slave')
            return host
        elif not host:
            log.msg('no host slave found, launching build in support-slave')
            return support
        else:
            log.msg('launching build in host-slave')
            return host
    except Exception as e:
        log.err(str(e))
        log.err(traceback.format_exc())
        log.msg('Selecting random slave')
        return random.choice(builders)
And then passed it to BuilderConfig.
The result is that I get this in twistd.log:
2014-04-28 11:01:45+0200 [-] added buildset 4329 to database
But the build never starts; in the web UI it always appears as Pending, and none of the log messages I've added appear in twistd.log.
2. Trying to mimic default behavior
I've been having a look at the Buildbot code to see how it is done by default.
In the file ./master/buildbot/process/buildrequestdistributor.py, class BasicBuildChooser, you have:
self.nextSlave = self.bldr.config.nextSlave
if not self.nextSlave:
    self.nextSlave = lambda _,slaves: random.choice(slaves) if slaves else None
So I've set exactly that lambda function in my BuilderConfig, and I'm getting exactly the same result: the build does not start.
You can set up a nextSlave function to assign slaves to a builder in a custom manner; see http://docs.buildbot.net/current/manual/cfg-builders.html#builder-configuration
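A minimal sketch of such a function, reusing the slave names and SlaveBuilder attributes from the question; it prefers the permanent host-slave whenever it is among the available slaves and only falls back to the latent support-slave otherwise:
import random

def prefer_permanent_slave(builder, available_slavebuilders):
    # prefer the permanent slave whenever it is available
    for sb in available_slavebuilders:
        if sb.slave.slavename == 'host-slave':
            return sb
    # otherwise fall back to whatever is available (e.g. the latent support-slave)
    return random.choice(available_slavebuilders) if available_slavebuilders else None

# passed to the builder, e.g.:
# BuilderConfig(name='my-builder',
#               slavenames=['host-slave', 'support-slave'],
#               factory=factory,
#               nextSlave=prefer_permanent_slave)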

Code coverage on Play! project

I have a Play! project where I would like to add some code coverage information. So far I have tried JaCoCo and scct. The former has the problem that it is based on bytecode, hence it seems to give warnings about missing tests for methods that are autogenerated by the Scala compiler, such as copy or canEqual. scct seems a better option, but in any case I get many errors during tests with both.
Let me stick with scct. I essentially get errors for every test that tries to connect to the database. Many of my tests load some fixtures into an H2 database in memory and then make some assertions. My Global.scala contains
override def onStart(app: Application) {
  SessionFactory.concreteFactory = Some(() => connection)

  def connection() = {
    Session.create(DB.getConnection()(app), new MySQLInnoDBAdapter)
  }
}
while the tests are usually enclosed in a block like
class MySpec extends Specification {
  def app = FakeApplication(additionalConfiguration = inMemoryDatabase())

  "The models" should {
    "be five" in running(app) {
      Fixtures.load()
      MyModels.all.size should be_==(5)
    }
  }
}
The line running(app) allows me to run a test in the context of a working application connected to an in-memory database, at least usually. But when I run code coverage tasks, such as scct coverage:doc, I get a lot of errors related to connecting to the database.
What is even weirder is that there are at least four different errors, such as:
ObjectExistsException: Cache play already exists
SQLException: Attempting to obtain a connection from a pool that has already been shutdown
Configuration error [Cannot connect to database [default]]
No suitable driver found for jdbc:h2:mem:play-test--410454547
Why is it that launching tests in the default configuration is able to connect to the database, while running in the context of scct (or JaCoCo) fails to initialize the cache and the DB?
specs2 tests run in parallel by default. Play disables parallel execution for the standard unit test configuration, but scct uses a different configuration so it doesn't know not to run in parallel.
Try adding this to your Build.scala:
.settings(parallelExecution in ScctPlugin.ScctTest := false)
Alternatively, you can add sequential to the beginning of your test classes to force all possible run configurations to run sequentially. I've got both in my files still, as I think I had some problems with the Build.scala solution at one point when I was using an early release candidate of Play.
A better option for Scala code coverage is Scoverage, which gives statement-level coverage.
https://github.com/scoverage/scalac-scoverage-plugin
Add to project/plugins.sbt:
addSbtPlugin("com.sksamuel.scoverage" % "sbt-scoverage" % "1.0.1")
Then run SBT with
sbt clean coverage test
You need to add sequential at the beginning of your Specification.
class MySpec extends Specification {
  sequential

  "MyApp" should {
    //...//
  }
}