I'm trying to use Locust as a library from pytest to write stress tests, but I've run into a problem I haven't been able to solve after several hours.
My pytest test contains assert statements. When an assert fails, I want Locust to stop immediately and the test to be marked as failed.
import logging

import gevent
import pytest
from locust import User, between, events, task
from locust.env import Environment


class StressRobot(User):
    wait_time = between(0.01, 0.1)
    __robot = None

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    @task
    def execute(self):
        try:
            logging.debug("do some stress test")
            assert False
        except Exception as e:
            events.request_failure.fire()
@pytest.mark.stress
def test():
    env = Environment(user_classes=[StressRobot])
    env.create_local_runner()
    env.runner.start(10, spawn_rate=10)
    gevent.spawn_later(5, lambda: env.runner.quit())
    env.runner.greenlet.join()
    assert env.stats.num_failures == 0
My code looks like the above. I want the pytest case to end immediately when assert False is hit, so that assert env.stats.num_failures == 0 reports an error. But the run does not stop: it keeps running and reporting errors, does not end until 5 seconds later, and in the end env.stats.num_failures == 0.
A failure does not stop a Locust run.
In your task you can call self.environment.runner.quit() (instead of just firing a request failure event).
See https://docs.locust.io/en/stable/writing-a-locustfile.html#environment-attribute
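For illustration, here is a minimal sketch of what that could look like inside the task (assuming Locust 2.x, where failures are reported through the unified request event; the request_type and name labels are made up for the example):

from locust import User, between, task


class StoppingRobot(User):
    wait_time = between(0.01, 0.1)

    @task
    def execute(self):
        try:
            assert False  # the check that should stop the whole run
        except AssertionError as e:
            # record a failure so env.stats.num_failures is incremented
            self.environment.events.request.fire(
                request_type="stress",
                name="execute",
                response_time=0,
                response_length=0,
                response=None,
                context={},
                exception=e,
            )
            # then shut the runner down so runner.greenlet.join() in the
            # pytest test returns immediately
            self.environment.runner.quit()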
Using:
celery==5.2.7
django-celery-results==2.4.0
django==4.1
pytest==7.1.2
pytest-django==4.5.2
pytest-celery==0.0.0
I'm trying to test a task (start_task) that creates a chord (of N work_task tasks) with a callback task to summarize the work.
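For context, here is a rough sketch of the task layout being described; the task names come from the question, but the bodies, arguments, and chord wiring are assumptions for illustration only:

from celery import chord, shared_task


@shared_task
def work_task(obj_id):
    # one unit of work
    return obj_id


@shared_task
def summarize_task(results):
    # chord callback: summarize the results of all work_tasks
    return len(results)


@shared_task
def start_task(obj_id, n=3):
    # build a chord of N work_tasks with summarize_task as the callback
    return chord(work_task.s(obj_id) for _ in range(n))(summarize_task.s()).id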
def test_function(db):
    ...
    obj = make_obj
    ...
    start_task.delay(obj)
I call start_task, which creates a single work_task. The chord never completes, so summarize_task never gets called. The work_task itself completes successfully (I can see that in the debugger). When I modify the test to:
def test_function(db, celery_app, celery_worker):
    ...
    obj = make_obj
    ...
    start_task.delay(obj)
The test dies on make_obj because the db connection is already closed.
E psycopg2.InterfaceError: connection already closed
My workaround for the moment is to call the tasks manually so that Celery is not involved, but this does not exercise the chord mechanisms, only the logic invoked by the chord.
If someone has an example of how to test this properly, I'd appreciate it.
It can be done using unittest-style tests with pytest; I haven't solved this with native pytest tests yet. The secret sauce below is to use a TransactionTestCase instead of a normal Django TestCase.
import time

import pytest
from celery.contrib.testing.worker import start_worker
from django.test import TransactionTestCase, override_settings

# `app` (the project's Celery app) and `do_average_in_chord` (the task under
# test) are imported from the project itself.


@pytest.mark.xdist_group(name="celery")
@override_settings(CELERY_TASK_ALWAYS_EAGER=False)
@override_settings(CELERY_TASK_EAGER_PROPAGATES=False)
class SyncTaskTestCase2(TransactionTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.celery_worker = start_worker(app, perform_ping_check=False)
        cls.celery_worker.__enter__()
        print(f"Celery Worker started {time.time()}")

    @classmethod
    def tearDownClass(cls):
        print(f"Tearing down Superclass {time.time()}")
        super().tearDownClass()
        print(f"Tore down Superclass {time.time()}")
        cls.celery_worker.__exit__(None, None, None)
        print(f"Celery Worker torn down {time.time()}")

    def test_success(self):
        print(f"Starting test at {time.time()}")
        self.task = do_average_in_chord.delay()
        self.task.get()
        print(f"Finished Averaging at {time.time()}")
        assert self.task.successful()
cls.celery_worker.__exit__(None, None, None) takes about 9 seconds to complete, which is not particularly wonderful...
I want to collect information from all my tests, to ensure that I've covered everything, but none of the posts I've come across seem to do this specifically.
If I use e.g. atexit, pytest_sessionfinish, or other means mentioned when searching for "pytest function after all tests", I seem to lose the ability to use the fixture, and they feel like teardown functions rather than actual tests.
I want to be able to assert that 1 and 2 are in my fixture list, after running all tests.
import pytest


@pytest.fixture(scope="module")
def fxtr_test_list():
    return []


def test_something_1(fxtr_test_list):
    fxtr_test_list.append(1)


def test_something_2(fxtr_test_list):
    fxtr_test_list.append(2)


@pytest.fixture(scope="session")
def global_check(request, fxtr_test_list):
    assert len(fxtr_test_list) == 0  # initial check, should pass

    def final_check():
        assert len(fxtr_test_list) == 0  # final check, should fail

    request.addfinalizer(final_check)
    return request
You can use fixtures only in tests or other fixtures, so using a fixture in some hook is not possible.
If you don't need a dedicated test, you could just use the fixture itself for testing by making it an autouse-fixture:
import pytest


@pytest.fixture(scope="session")
def fxtr_test_list():
    return []

...

@pytest.fixture(scope="session", autouse=True)
def global_check(request, fxtr_test_list):
    assert len(fxtr_test_list) == 0  # initial check, should pass
    yield
    assert len(fxtr_test_list) == 0  # final check, should fail
Note that I changed the scope of the first fixture to "session", otherwise it cannot be used by a session-scoped fixture. Also, I simplified the second fixture to use the standard setup / yield / teardown pattern.
This gives you something like:
$ python -m pytest -v test_after_all.py
=================================================
...
collected 2 items
test_after_all.py::test_something_1 PASSED
test_after_all.py::test_something_2 PASSED
test_after_all.py::test_something_2 ERROR
======================================================= ERRORS ========================================================
________________________________________ ERROR at teardown of test_something_2 ________________________________________
request = <SubRequest 'global_check' for <Function test_something_1>>, fxtr_test_list = [1, 2]
    @pytest.fixture(scope="session", autouse=True)
    def global_check(request, fxtr_test_list):
        assert len(fxtr_test_list) == 0  # initial check, should pass
        yield
>       assert len(fxtr_test_list) == 0  # final check, should fail
E assert 2 == 0
E +2
E -0
...
============================================= 2 passed, 1 error in 0.23s ==============================================
If you really need a dedicated test as the last test, you could use an ordering plugin like pytest-order and mark the test as the last one:
@pytest.mark.order(-1)
def test_all_tests(global_check):
    ...
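A fuller sketch of how that last test could inspect the collected data, assuming global_check is changed to yield the shared list (that change, and the assertions, are additions for illustration, not part of the snippet above):

import pytest


@pytest.fixture(scope="session")
def fxtr_test_list():
    return []


@pytest.fixture(scope="session", autouse=True)
def global_check(fxtr_test_list):
    assert len(fxtr_test_list) == 0  # initial check
    # hand the shared list to any test that requests global_check
    yield fxtr_test_list


@pytest.mark.order(-1)
def test_all_tests(global_check):
    # runs last thanks to pytest-order, so every other test has already
    # appended its marker to the shared list
    assert 1 in global_check
    assert 2 in global_check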
Within a unit test, I'm using monkeypatch in order to change entries in a dict.
from hypothesis import given, strategies

test_dict = {"first": "text1", "second": "text2"}


@given(val=strategies.text())
def test_monkeypath(monkeypatch, val):
    monkeypatch.setitem(test_dict, "second", val)
    assert isinstance(test_dict["second"], str)
The test passes, but I get a warning when executing the following test code with pytest.
============================================= warnings summary =============================================
.PyCharm2019.2/config/scratches/hypothesis_monkeypatch.py::test_monkeypath
  c:\users\d292498\appdata\local\conda\conda\envs\pybt\lib\site-packages\hypothesis\extra\pytestplugin.py:172:
  HypothesisDeprecationWarning: .PyCharm2019.2/config/scratches/hypothesis_monkeypatch.py::test_monkeypath
  uses the 'monkeypatch' fixture, which is reset between function calls but not between test cases generated
  by `@given(...)`. You can change it to a module- or session-scoped fixture if it is safe to reuse; if not we
  recommend using a context manager inside your test function. See
  https://docs.pytest.org/en/latest/fixture.html#sharing-test-data for details on fixture scope.
    note_deprecation(
-- Docs: https://docs.pytest.org/en/stable/warnings.html
======================================== 1 passed, 1 warning in 0.30s ========================================
Does this mean that the value of the dict will only be changed once, no matter how many test cases will be generated by hypothesis?
I am not sure how to use a context manager in this case. Can somebody please point me in the right direction?
Your problem is that the dict is patched only once for all test invocations, and Hypothesis is warning you about that. If you had any logic before the monkeypatch.setitem line, this would be very bad!
You can work around this by using monkeypatch directly, instead of via a fixture:
from _pytest.monkeypatch import MonkeyPatch
from hypothesis import given, strategies

test_dict = {"first": "text1", "second": "text2"}


@given(val=strategies.text())
def test_monkeypath(val):
    assert test_dict["second"] == "text2"  # this would fail in your version
    with MonkeyPatch().context() as mp:
        mp.setitem(test_dict, "second", val)
        assert test_dict["second"] == val
    assert test_dict["second"] == "text2"
et voila, no warning.
Use the monkeypatch context manager
@given(val=strategies.text())
def test_monkeypath(monkeypatch, val):
    with monkeypatch.context() as m:
        m.setitem(test_dict, "second", val)
        assert isinstance(test_dict["second"], str)
By the documentation, the eventually trait:
"Invokes the passed by-name parameter repeatedly until it either succeeds, or a configured maximum amount of time has passed, sleeping a configured interval between attempts."
while fail:
"fails the test unconditionally."
So I want to use eventually to wait until a successful status arrives, but use fail to fail the test when I already know the test must fail.
E.g., when converting a video with ffmpeg, I want to wait until the conversion is completed, but if the conversion reaches an "error" status I want the test to fail.
With this test:
test("eventually fail") {
eventually (timeout(Span(30, Seconds)), interval(Span(15, Seconds))) {
println("Waiting... ")
assert(1==1)
fail("anyway you must fail")
}
}
I understand that I cannot make a test "fail unconditionally" inside an eventually block: it looks like eventually ignores fail until the timeout.
Is this the correct behaviour?
So, per the ScalaTest assertions documentation, does fail not "fail the test unconditionally" but rather just "throw an exception"?
It's the same because the only way to fail a test in Scalatest is to throw an exception.
Look at the source:
def eventually[T](fun: => T)(implicit config: PatienceConfig): T = {
  val startNanos = System.nanoTime

  def makeAValiantAttempt(): Either[Throwable, T] = {
    try {
      Right(fun)
    }
    catch {
      case tpe: TestPendingException => throw tpe
      case e: Throwable if !anExceptionThatShouldCauseAnAbort(e) => Left(e)
    }
  }
  ...
So if you want to get your failure through, you could use pending instead of fail (but of course, the test will be reported as pending, not failed). Or write your own version of eventually which lets more exceptions through.
I'm running a ScalaTest (FlatSpec) suite programmatically, like so:
new MyAwesomeSpec().execute()
Is there some way I can figure out if all tests passed? Suite#execute() returns Unit here, so does not help. Ideally, I'd like to run the whole suite and then get a return value indicating whether any tests failed; an alternative would be to fail/return immediately on any failed test.
I can probably achieve this by writing a new FlatSpec subclass that overrides the Scalatest Suite#execute() method to return a value, but is there a better way to do what I want here?
org.scalatest.Suite also has a run method, which returns the Status of the executed test.
With a little tweaking, we can access the execution result of each test. To run a test we need to provide a Reporter instance; an ad-hoc empty reporter will be enough in our simple case:
val reporter = new Reporter() {
  override def apply(e: Event) = {}
}
So, let's execute them:
import org.scalatest.events.Event
import org.scalatest.{Args, Reporter}

val testSuite = new MyAwesomeSpec()
val testNames = testSuite.testNames

testNames.foreach(test => {
  val result = testSuite.run(Some(test), Args(reporter))
  val status = if (result.succeeds()) "OK" else "FAILURE!"
  println(s"Test: '$test'\n\tStatus=$status")
})
This will produce output similar to following:
Test: 'This test should pass'
    Status=OK
Test: 'Another test should fail'
    Status=FAILURE!
Having access to each test case name and its respective execution result, you should have enough data to achieve your goal.