pytest-xdist indirect fixtures with class scope - pytest

I have some complicated and heavy logic to build a test object, and the tests are very long running. They are integration tests, and I wanted to try to parallelize them a bit, so I found the pytest-xdist library.
Because of the heavy cost of building the test object, I am using pytest's indirect parametrization of fixtures to build it at test time rather than at collection time. The code I am using for testing is below.
# run.py
import pytest

@pytest.mark.parametrize("attribute", (
    ["pid1", ["pod1", "pod2", "pod3"]],
    ["pid2", ["pod2", "pod4", "pod5"]],
), indirect=True)
class TestSampleWithScenarios(object):

    @pytest.fixture(scope="class")
    def attribute(self, request):
        # check out the pod here
        # build the device object and yield it
        device = {}
        yield device
        # tear down the device object
        # release the pod

    def test_demo1(self, attribute):
        assert isinstance(attribute, str)

    def test_demo2(self, attribute):
        assert isinstance(attribute, str)
My run command is currently pytest run.py -n 4 --dist=loadscope
When I do not use loadscope, every test is sent to its own worker. I do not want this because I would like to build the device object only once and use it for all related tests.
When I use loadscope, all the tests are executed against gw0 and I am not getting any parallelism.
I am wondering whether there are any tweaks I am missing, or whether this functionality is simply not implemented yet.
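One possible workaround, not from the original post and assuming a pytest-xdist version that ships --dist=loadgroup and the xdist_group mark (2.3.0 or later), is to group tests per scenario rather than per class, so each parametrized scenario keeps its class-scoped fixture on one worker while different scenarios can run in parallel:

import pytest

@pytest.mark.parametrize("attribute", (
    pytest.param(["pid1", ["pod1", "pod2", "pod3"]],
                 marks=pytest.mark.xdist_group(name="pid1")),
    pytest.param(["pid2", ["pod2", "pod4", "pod5"]],
                 marks=pytest.mark.xdist_group(name="pid2")),
), indirect=True)
class TestSampleWithScenarios(object):

    @pytest.fixture(scope="class")
    def attribute(self, request):
        # build the device object once per scenario
        device = {"pid": request.param[0], "pods": request.param[1]}
        yield device
        # tear down the device object / release the pod here

    def test_demo1(self, attribute):
        assert isinstance(attribute, dict)

    def test_demo2(self, attribute):
        assert isinstance(attribute, dict)

Running this with pytest run.py -n 2 --dist=loadgroup keeps all tests of the same xdist_group on one worker (so the class-scoped fixture is built once per scenario), while the two scenarios can be scheduled on different workers.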

Related

JAX pmap with multi-core CPU

What is the correct method for using multiple CPU cores with jax.pmap?
The following example sets an environment variable to expose multiple CPU devices for SPMD, checks that JAX recognises the devices, and then attempts to busy-lock them.
import os
os.environ["XLA_FLAGS"] = '--xla_force_host_platform_device_count=2'
import jax as jx
import jax.numpy as jnp
jx.local_device_count()
# WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
# 2
jx.devices("cpu")
# [CpuDevice(id=0), CpuDevice(id=1)]
def sfunc(x):
    while True:
        pass
jx.pmap(sfunc)(jnp.arange(2))
Executing this from a Jupyter kernel and observing htop shows that only one core is locked.
I receive the same output from htop when omitting the first two lines and running:
$ env XLA_FLAGS=--xla_force_host_platform_device_count=2 python test.py
Replacing sfunc with
def sfunc(x): return 2.0*x
and calling
jx.pmap(sfunc)(jnp.arange(2))
# ShardedDeviceArray([0., 2.], dtype=float32, weak_type=True)
does return a ShardedDeviceArray.
Clearly I am not correctly configuring JAX/XLA to use two cores. What am I missing and what can I do to diagnose the problem?
As far as I can tell, you are configuring the cores correctly (see e.g. Issue #2714). The problem lies in your test function:
def sfunc(x):
    while True:
        pass
This function gets stuck in an infinite loop at trace-time, not at run-time. Tracing happens in your host Python process on a single CPU (see How to think in JAX for an introduction to the idea of tracing within JAX transformations).
If you want to observe CPU usage at runtime, you'll have to use a function that finishes tracing and begins running. For that you could use any long-running function that actually produces results. Here is a simple example:
def sfunc(x):
    for i in range(100):
        x = (x @ x)  # matrix product keeps each device busy at run-time
    return x

jx.pmap(sfunc)(jnp.zeros((2, 1000, 1000)))
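To see the trace-time vs. run-time distinction directly, here is a small illustration (not from the original answer) using jax.jit; the Python-level print fires only while the function is being traced, not when the compiled version runs:

import jax as jx
import jax.numpy as jnp

def f(x):
    print("tracing with", x)  # Python side effect: runs at trace time only
    return x * 2

jitted = jx.jit(f)
jitted(jnp.arange(2))  # prints an abstract tracer, then compiles and runs
jitted(jnp.arange(2))  # no print: the cached compiled function runs directly

The same staging happens inside jx.pmap, which is why a Python-level while True loop never lets tracing finish and never reaches the devices.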

Any idea why the flaky plugin is not triggered on failed tests decorated with @pytest.mark.flaky(max_runs=...)

I have a pytest suite running in this env:
Test session starts (platform: linux, Python 3.6.1, pytest 3.3.1, pytest-sugar 0.9.1)
plugins: flaky-3.5.3, dependency-0.3.2, forked-0.2, logger-0.4.0, sugar-0.9.1, xdist-1.24.1
I have a parametrized test, decorated with flaky, and it is supposed to be re-run max three times if it fails.
@pytest.mark.flaky(max_runs=3)  # re-run this test in case it fails
def test_cucubau(getBauBau_fixture):
    assert cucubau(getBauBau_fixture) == True
However, it fails only once, it is not re-run, and my flaky test report is empty.
===Flaky Test Report===
===End Flaky Test Report===
Based on what I read about the flaky plugin, the usage should be trivial, but I'm not able to see what is wrong with my code.
Any idea?
I believe you need the pytest-rerunfailures plugin for that to work. Then you should be able to annotate your test with @pytest.mark.flaky(reruns=3).
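A minimal, self-contained sketch of that approach, assuming pytest-rerunfailures is installed (pip install pytest-rerunfailures); the flaky test body here is only an illustration, not the code from the question:

import random
import pytest

@pytest.mark.flaky(reruns=3, reruns_delay=1)  # retry up to 3 times, waiting 1 s between attempts
def test_sometimes_fails():
    assert random.random() > 0.3  # fails occasionally, so the reruns kick in

The same behaviour can also be requested for a whole run from the command line with pytest --reruns 3.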

Pytest does not connect to the database when I run the test selectively

Can anyone tell me what could cause this anomalous pytest behavior?
When I run plain pytest, all tests pass.
When I instead run a single test, e.g. pytest tests/gate/test_loan.py::Tests::test_pass, it breaks; the stack trace shows that it fails because it cannot connect to the database. The question is how this second way of running the tests affects the database connection.
I figured out this behavior.
The problem was related to a wrong import of db_session.
main/app1/db.py:
db_session = scoped_session(sessionmaker())
tests/conftest.py, wrong import:
from app1.db import db_session
tests/conftest.py, correct import:
from main.app1.db import db_session
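A short illustration (a hypothetical SQLAlchemy setup, not from the original post) of why the import path matters: when a package is importable both as app1 and as main.app1, Python caches them under different keys in sys.modules and loads two separate module objects, each with its own db_session, so only one of them ever gets an engine bound to it:

# main/app1/db.py
from sqlalchemy.orm import scoped_session, sessionmaker

db_session = scoped_session(sessionmaker())

# main/app1/app.py -- application startup binds the engine to THIS db_session
from sqlalchemy import create_engine
from main.app1.db import db_session

engine = create_engine("postgresql://localhost/mydb")  # hypothetical connection URL
db_session.configure(bind=engine)

# tests/conftest.py
# "from app1.db import db_session" would import a second, unbound copy of the
# module, so queries in tests would fail to connect; the correct import is:
from main.app1.db import db_session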

Run check before tests

I have tests which are run against a real database (not an in-memory or mocked up one). Before the tests run, I want to check the environment variables to make sure that the TEST database is not the same as the PROD database. If they do match I want to stop the tests.
I thought I could do something like this:
class Main extends Suites(
  new EnvironmentTest, // This needs to run first!!
  new Tests1,
  new Tests2,
  new Tests3
)

@DoNotDiscover
class EnvironmentTest extends PlaySpec {
  "Environment variables" should {
    "PROD SQL DB should be different from TEST SQL DB" in {
      // Do the check and stop execution
      throw new EnvironmentVariableException("TEST and PROD are the same!")
    }
  }
}
When I run this code, the exception is thrown and the first test fails, but the rest of the tests will run.
Is it possible to stop the execution of the tests (or do a check beforehand to prevent them from running at all)?

Code coverage on Play! project

I have a Play! project where I would like to add some code coverage information. So far I have tried JaCoCo and scct. The former has the problem that it is bytecode-based, so it seems to warn about missing tests for methods autogenerated by the Scala compiler, such as copy or canEqual. scct seems a better option, but in either case I get many errors during tests.
Let me stick with scct. I essentially get errors for every test that tries to connect to the database. Many of my tests load some fixtures into an H2 database in memory and then make some assertions. My Global.scala contains
override def onStart(app: Application) {
  SessionFactory.concreteFactory = Some(() => connection)

  def connection() = {
    Session.create(DB.getConnection()(app), new MySQLInnoDBAdapter)
  }
}
while the tests usually are enclosed in a block like
class MySpec extends Specification {
  def app = FakeApplication(additionalConfiguration = inMemoryDatabase())

  "The models" should {
    "be five" in running(app) {
      Fixtures.load()
      MyModels.all.size should be_==(5)
    }
  }
}
The line running(app) allows me to run a test in the context of a working application connected to an in-memory database, at least usually. But when I run code coverage tasks, such as scct coverage:doc, I get a lot of errors related to connecting to the database.
What is even weirder is that there are at least four different errors, such as:
ObjectExistsException: Cache play already exists
SQLException: Attempting to obtain a connection from a pool that has already been shutdown
Configuration error [Cannot connect to database [default]]
No suitable driver found for jdbc:h2:mem:play-test--410454547
Why is it that launching the tests in the default configuration can connect to the database, while running them in the context of scct (or JaCoCo) fails to initialize the cache and the database?
specs2 tests run in parallel by default. Play disables parallel execution for the standard unit test configuration, but scct uses a different configuration so it doesn't know not to run in parallel.
Try adding this to your Build.scala:
.settings(parallelExecution in ScctPlugin.ScctTest := false)
Alternatively, you can add sequential to the beginning of your test classes to force all possible run configurations to run sequentially. I've got both in my files still, as I think I had some problems with the Build.scala solution at one point when I was using an early release candidate of Play.
A better option for Scala code coverage is Scoverage which gives statement line coverage.
https://github.com/scoverage/scalac-scoverage-plugin
Add to project/plugins.sbt:
addSbtPlugin("com.sksamuel.scoverage" % "sbt-scoverage" % "1.0.1")
Then run SBT with
sbt clean coverage test
You need to add sequential at the beginning of your Specification.
class MySpec extends Specification {
  sequential

  "MyApp" should {
    //...//
  }
}