How to assert setup outside teardown in multiple tests - pytest

I have a lot of tests, and every test asserts something specific, but also asserts that the setup was done as expected. The setup cannot be verified before the test starts.
My approach is to assert in the teardown. Is there a better approach where I do not have to add the assert to every test file, since I have hundreds of them?
Minimal example:
my_test.py
import pytest

@pytest.mark.parametrize("number_a", [1, 2, 3, 4, 5, 6, 7])
def test_low_numbers(number_a):
    run_test(number_a=number_a)

def run_test(number_a):
    assert number_a != 4
conftest.py
import pytest

def very_important_check(number_a):
    assert number_a != 2, "Not two, stupid."

@pytest.fixture(autouse=True, scope="function")
def signal_handler(request, number_a):
    print("########Setup")
    yield
    print("########Teardown")
    very_important_check(number_a)
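An alternative sketch that keeps the check in a single place (assuming the layout of the minimal example above, and that a hook is acceptable instead of the autouse fixture): pytest calls pytest_runtest_teardown for every test, and item.callspec.params carries the parametrized values, so a root-level conftest.py can run the check without any test file mentioning it.

# conftest.py (sketch only; names mirror the minimal example)
def very_important_check(number_a):
    assert number_a != 2, "Not two, stupid."

def pytest_runtest_teardown(item, nextitem):
    # item.callspec only exists for parametrized tests, hence the guard
    callspec = getattr(item, "callspec", None)
    if callspec is not None and "number_a" in callspec.params:
        very_important_check(callspec.params["number_a"])

A failing assert here shows up as a teardown error, the same way it does in the autouse fixture.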

Related

pytest use data from one test as parameter to another test

I am wondering if it is possible to use data generated by one test as a parameter to another test. In my case I need to modify a variable (a list), and it would be great if I could use this list as a parameter (running as many tests as the list has elements).
Here is the code (it is not working, but maybe you can give me some hints):
import pytest

class TestCheck:
    x = [1, 4]

    @classmethod
    def setup_class(self):
        print('here setup')

    @classmethod
    def teardown_class(self):
        print('here is finish')

    def test_5(self):
        self.x.append(6)
        assert 1 == 2

    @pytest.mark.parametrize("region", x)
    def test_6(self, region):
        assert region > 5, f"x: {self.x}"
Output:
FAILED sandbox_tests3.py::TestCheck::test_5 - assert 1 == 2
FAILED sandbox_tests3.py::TestCheck::test_6[1] - AssertionError: x: [1, 4, 6]
FAILED sandbox_tests3.py::TestCheck::test_6[4] - AssertionError: x: [1, 4, 6]
So it looks like x contains the right values; however, the parametrization does not see the newly appended value.
I was also trying to use pytest_cases, but the results were very similar.
Any help is appreciated.
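For context, parametrize arguments are evaluated once, at collection time, so the decorator sees x as it was when the class was defined; mutations made while test_5 runs can never reach it. A minimal sketch (my own, not the original code) of a workaround that reads the shared list at test time instead of in the decorator:

class TestCheck:
    x = [1, 4]

    def test_5(self):
        self.x.append(6)  # mutates the shared class attribute at run time
        assert 6 in self.x

    def test_6(self):
        # iterating inside the test body sees the value appended by test_5,
        # at the cost of depending on test order and losing per-value test IDs
        for region in self.x:
            assert region >= 1, f"x: {self.x}"

Anything handed to @pytest.mark.parametrize, by contrast, is frozen once collection has finished.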

pytest mock patch side_effect not iterate when used together with pytest.mark.parametrize

I have the pytest script below, and the side_effect value [2, 6] is not being iterated; it is always stuck at the value 2 in the test function test_my_function.
My question is:
How can I make the side_effect value iterate together with the parametrized test cases in test_my_function? (Assume we must use parametrize.)
#!/usr/bin/env python3
import pytest

def my_function(x):
    return x * 2

@pytest.fixture
def mock_my_function(mocker):
    mocker.patch(
        __name__ + ".my_function", side_effect=[2, 6]
    )

@pytest.mark.parametrize("input, expect", [(1, 2), (3, 6)])
def test_my_function(input, expect, mock_my_function):
    assert expect == my_function(input)
first, your test isn't really testing anything if you mock the function you're trying to test
second, function-scoped fixtures are set up each time the test function is called -- for each parametrization case-set it'll run your fixture
this means (in your example) that both invocations of your test will have my_function mocked to return 2 for the only call that happens
if you want to additionally parametrize the mocked function, I would suggest including it in your parametrize list:
@pytest.mark.parametrize(
    ('input', 'expect', 'mocked_ret'),
    (
        (1, 2, 2),
        (3, 6, 6),
    ),
)
def test_my_function(input, expect, mocked_ret, mocker):
    mocker.patch(f"{__name__}.my_function", return_value=mocked_ret)
    assert my_function(input) == expect
disclaimer: I'm a pytest core dev
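A tiny illustration of that point with plain unittest.mock (my own example, not part of the answer above): side_effect is consumed one value per call on a single mock object, so a patch that is re-created for every parametrized case only ever serves its first value.

from unittest import mock

m = mock.Mock(side_effect=[2, 6])
assert m() == 2  # first call on this mock object returns the first value
assert m() == 6  # second call on the same object returns the next one
# a brand-new Mock(side_effect=[2, 6]) starts over at 2, which is exactly
# what the function-scoped fixture does for each parametrized test case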

Create parameterized test fixtures that depend on other parameterized test fixtures?

I want to do something like this
def foo_bar(a, b):
    """ The function I want to test """
    return ...

def subparams(a):
    return [some stuff based on what a is]

@pytest.fixture(scope='module', params=('foo', 'bar'))
def a(request):
    # lengthy setup based on request.param
    yield request.param
    # teardown

@pytest.fixture(params=subparams(a))
def b(request):
    return request.param

def test_foo_bar(a, b):
    result = foo_bar(a, b)
    assert <something about result>
In other words, I have a parameterized fixture a which simply takes a normal list of parameters. For each value of a, I also have a function subparams for generating the associated parameters b. For the sake of argument, let's say that subparams is trivially implemented as follows:
def subparams(a):
    return [1, 2, 3] if a == 'foo' else [4, 5]
in which case I would want test_foo_bar to be invoked with the following:
'foo', 1
'foo', 2
'foo', 3
'bar', 4
'bar', 5
Unfortunately, the @pytest.fixture(params=subparams(a)) bit doesn't work because at that point a is still just a function, not an instantiation based on a's parameterization. How do I achieve the effect of having test_foo_bar called with those kinds of combinations of a and b, where a takes a long time to set up (hence the scope='module' bit) and b is parameterized depending on a?
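One common pattern for this shape of problem, sketched here under the assumption that the (a, b) pairs can be computed at collection time, is indirect parametrization: build the pairs up front and route a through the fixture via indirect=['a'], so the expensive module-scoped setup still runs only once per distinct value of a.

import pytest

def foo_bar(a, b):
    """ placeholder for the function under test """
    return (a, b)

def subparams(a):
    return [1, 2, 3] if a == 'foo' else [4, 5]

# every (a, b) combination, computed at collection time
PAIRS = [(a, b) for a in ('foo', 'bar') for b in subparams(a)]

@pytest.fixture(scope='module')
def a(request):
    # lengthy setup based on request.param; pytest groups items that share a
    # module-scoped parameter, so this runs once per distinct value
    yield request.param
    # teardown

@pytest.mark.parametrize(('a', 'b'), PAIRS, indirect=['a'])
def test_foo_bar(a, b):
    result = foo_bar(a, b)
    assert result == (a, b)

pytest_generate_tests with metafunc.parametrize is another route to the same set of invocations.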

How can I generate a list of n unique elements picked from a set?

How do I generate a list of n unique values (Gen[List[T]]) from a set of values (not generators) using ScalaCheck? This post uses Gen[T]* instead of a set of values, and I can't seem to rewrite it to make it work.
EDIT
At @Jubobs' request I now shamefully display what I have tried so far, revealing my utter novice status at using ScalaCheck :-)
I simply tried to replace the gs: Gen[T]* repeated parameter with a Set in what @Eric wrote as a solution here:
def permute[T](n: Int, gs: Set[T]): Gen[Seq[T]] = {
  val perm = Random.shuffle(gs.toList)
  for {
    is <- Gen.pick(n, 1 until gs.size)
    xs <- Gen.sequence[List[T], T](is.toList.map(perm(_)))
  } yield xs
}
but is.toList.map(perm(_)) got underlined with red, with IntelliJ IDEA telling me that "You should read ScalaCheck API first before blind (although intuitive) trial and error", or maybe "Type mismatch, expected: Traversable[Gen[T]], actual List[T]", I can't remember.
I also tried several other ways, most of which I find ridiculous (and thus not worth posting) in hindsight, the most naive being using @Eric's (otherwise useful and neat) solution as-is:
val g = for (i1 <- Gen.choose(0, myList1.length - 1);
             i2 <- Gen.choose(0, myList2.length - 1))
  yield new MyObject(myList1(i1), myList2(i2))

val pleaseNoDuplicatesPlease = permute(4, g, g, g, g, g)
After some testing I saw that pleaseNoDuplicatesPlease in fact contained duplicates, at which point I weighed my options of having to read through ScalaCheck API and understand a whole lot more than I do now (which will inevitably, and gradually come), or posting my question at StackOverflow (after carefully searching whether similar questions exist).
Gen.pick is right up your alley:
scala> import org.scalacheck.Gen
import org.scalacheck.Gen
scala> val set = Set(1,2,3,4,5,6)
set: scala.collection.immutable.Set[Int] = Set(5, 1, 6, 2, 3, 4)
scala> val myGen = Gen.pick(5, set).map { _.toList }
myGen: org.scalacheck.Gen[scala.collection.immutable.List[Int]] = org.scalacheck.Gen$$anon$3#78693eee
scala> myGen.sample
res0: Option[scala.collection.immutable.List[Int]] = Some(List(5, 6, 2, 3, 4))
scala> myGen.sample
res1: Option[scala.collection.immutable.List[Int]] = Some(List(1, 6, 2, 3, 4))
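As a follow-up sketch (my own usage example, not from the transcript above): because Gen.pick draws from the Set without replacement, the picked elements are distinct by construction, which a property can spell out.

import org.scalacheck.{Gen, Prop}

val set = Set(1, 2, 3, 4, 5, 6)
val genUnique: Gen[List[Int]] = Gen.pick(5, set).map(_.toList)

// every sampled list is free of duplicates
val noDuplicates = Prop.forAll(genUnique) { xs => xs.distinct.size == xs.size }
// noDuplicates.check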

scalaz stream structure for growing lists

I have a hunch that I can (should?) be using scalaz-streams to solve my problem, which is as follows.
I have a starting item A. I have a function that takes an A and returns a list of A.
def doSomething(a: A): List[A]
I have a work queue that starts with 1 item (the starting item). When we process (doSomething) each item, it may add many items to the end of the same work queue. At some point, however (after many millions of items), each subsequent item that we doSomething on will start adding fewer and fewer items to the work queue, and eventually no new items will be added (doSomething will return Nil for these items). This is how we know the computation will eventually terminate.
Assuming scalaz-streams is appropriate for this, could someone please give me some tips as to which overall structure or types I should be looking at to implement this?
Once a simple implementation with a single "worker" is done, I would also like to use multiple workers to process queue items in parallel, e.g. having a pool of 5 workers (and each worker would be farming its task to an agent to calculate doSomething) so I would need to handle effects (such as worker failure) as well in this algorithm.
So the answer to the "how?" is:
import scalaz.stream._
import scalaz.stream.async._
import Process._

def doSomething(i: Int) = if (i == 0) Nil else List(i - 1)

val q = unboundedQueue[Int]
val out = unboundedQueue[Int]

q.dequeue
  .flatMap(e => emitAll(doSomething(e)))
  .observe(out.enqueue)
  .to(q.enqueue).run.runAsync(_ => ()) // runAsync can process failures; there is `.onFailure` as well

q.enqueueAll(List(3,5,7)).run

q.size.continuous
  .filter(0==)
  .map(_ => -1)
  .to(out.enqueue).once.run.runAsync(_ => ()) // call it only after enqueueAll

import scalaz._, Scalaz._

val result = out
  .dequeue
  .takeWhile(_ != -1)
  .map(_.point[List])
  .foldMonoid.runLast.run.get // run synchronously
Result:
result: List[Int] = List(2, 4, 6, 1, 3, 5, 0, 2, 4, 1, 3, 0, 2, 1, 0)
However, you might notice that:
1) I had to solve the termination problem. The same problem exists for akka-stream and is much harder to resolve there, as you don't have access to the Queue and there is no natural back-pressure to guarantee that the queue won't be empty just because of fast readers.
2) I had to introduce another queue for the output (and convert it to a List), as the working one becomes empty at the end of the computation.
So, neither library is well adapted to such requirements (a finite stream); however, scalaz-stream (which is going to become "fs2" after removing the scalaz dependency) is flexible enough to implement your idea. The big "but" is that it runs sequentially by default. There are (at least) two ways to make it faster:
1) split your doSomething into stages, like .flatMap(doSomething1).flatMap(doSomething2).map(doSomething3), and then put additional queues between them (about 3 times faster if the stages take equal time).
2) parallelize the queue processing. Akka has mapAsync for that - it can do maps in parallel automatically. Scalaz-stream has chunks - you can group your q into chunks of, let's say, 5 and then process each element inside a chunk in parallel. Anyway, neither solution (akka or scalaz) is well adapted to using one queue as both input and output.
But, again, it's all too complex and pointless as there is a classic simple way:
import scala.annotation.tailrec

@tailrec def calculate(l: List[Int], acc: List[Int]): List[Int] =
  if (l.isEmpty) acc else {
    val processed = l.flatMap(doSomething)
    calculate(processed, acc ++ processed)
  }
scala> calculate(List(3,5,7), Nil)
res5: List[Int] = List(2, 4, 6, 1, 3, 5, 0, 2, 4, 1, 3, 0, 2, 1, 0)
And here is the parallelized one:
@tailrec def calculate(l: List[Int], acc: List[Int]): List[Int] =
  if (l.isEmpty) acc else {
    val processed = l.par.flatMap(doSomething).toList
    calculate(processed, acc ++ processed)
  }
scala> calculate(List(3,5,7), Nil)
res6: List[Int] = List(2, 4, 6, 1, 3, 5, 0, 2, 4, 1, 3, 0, 2, 1, 0)
So, yes, I would say that neither scalaz-stream nor akka-streams fits your requirements; however, classic Scala parallel collections fit perfectly.
If you need distributed calculations across multiple JVMs, take a look at Apache Spark; its Scala DSL uses the same map/flatMap/fold style. It allows you to work with big collections that don't fit into a single JVM's memory (by scaling them across JVMs), so you can improve the @tailrec def calculate by using an RDD instead of a List. It will also give you instruments to handle failures inside doSomething.
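A rough sketch of that idea (it assumes Spark on the classpath and an existing SparkContext named sc; this is not a tuned implementation): keep the frontier as an RDD so each round of doSomething is distributed, and only the accumulated results are collected to the driver.

import org.apache.spark.rdd.RDD

def doSomething(i: Int): List[Int] = if (i == 0) Nil else List(i - 1)

def calculateRdd(frontier: RDD[Int], acc: List[Int]): List[Int] =
  if (frontier.isEmpty()) acc
  else {
    val processed = frontier.flatMap(doSomething).cache()
    calculateRdd(processed, acc ++ processed.collect().toList)
  }

// calculateRdd(sc.parallelize(List(3, 5, 7)), Nil)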
P.S. This is why I don't like the idea of using streaming libraries for such tasks: streaming is more about infinite streams coming from external systems (like HTTP requests) than about computation over predefined (even if big) data.
P.S.2 If you need reactive-like (non-blocking) behaviour, you might use Future (or scalaz.concurrent.Task) + Future.sequence.
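And a minimal sketch of what P.S.2 hints at, using only the standard library (the names are my own): each round runs doSomething inside Futures, and the recursion continues once Future.traverse has gathered them, so nothing blocks until the final result is awaited at the edge of the program.

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

def doSomething(i: Int): List[Int] = if (i == 0) Nil else List(i - 1)

def calculateAsync(l: List[Int], acc: List[Int]): Future[List[Int]] =
  if (l.isEmpty) Future.successful(acc)
  else
    Future.traverse(l)(i => Future(doSomething(i))).flatMap { chunks =>
      val processed = chunks.flatten
      calculateAsync(processed, acc ++ processed)
    }

// Await.result(calculateAsync(List(3, 5, 7), Nil), 10.seconds)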