How to force pytest to return an error code - pytest

I have the following structure:
A Koholo job calls a Python script; the script returns an error code (1 - failed, 0 - passed) when it ends. Koholo waits for the error code before continuing to the next job step (the next script).
Now, instead of a plain Python script, I'm running pytest scripts (with the command: python -m pytest test_name), but pytest is not returning an error code, so the Koholo job times out.
Please let me know if there is a way to make pytest return an error code when it finishes.
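For context, a minimal sketch of the kind of plain script the job currently calls; the job_succeeded check is a hypothetical placeholder:
import sys

def job_succeeded():
    # hypothetical placeholder for whatever the real script checks
    return True

if __name__ == "__main__":
    # exit code contract expected by Koholo: 0 - passed, 1 - failed
    sys.exit(0 if job_succeeded() else 1)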

For example, you can pass any pytest argument that you normally pass on the CLI; I am just using markers as an example:
import sys
import pytest
results = pytest.main(["-m", "my_marker"])
sys.exit(results)
If you want more details on the possible exit codes, see:
https://docs.pytest.org/en/7.1.x/reference/exit-codes.html
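On recent pytest versions the return value of pytest.main() is a pytest.ExitCode enum member, so you can branch on it before exiting; a minimal sketch, assuming pytest 5.0 or later:
import sys
import pytest

exit_code = pytest.main(["-m", "my_marker"])

# ExitCode is an IntEnum: OK == 0, TESTS_FAILED == 1, etc.
if exit_code == pytest.ExitCode.OK:
    print("all selected tests passed")
elif exit_code == pytest.ExitCode.TESTS_FAILED:
    print("some tests failed")

sys.exit(int(exit_code))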

When pytest finishes, it calls the pytest_sessionfinish(session, exitstatus) hook.
Try adding sys.exit(exitstatus) to this hook:
import sys

def pytest_sessionfinish(session, exitstatus):
    """ whole test run finishes. """
    sys.exit(exitstatus)
You can also check the exit code by running this script (Windows batch):
start /wait python -m pytest test_name
echo %errorlevel%
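On Linux or macOS, the equivalent check would be (a sketch, assuming a POSIX shell):
python -m pytest test_name
echo $?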

Related

pytest-asyncio RuntimeError: Cannot run the event loop while another loop is running

When trying to do UI automation with pytest-asyncio and pytest-playwright, I got an exception like: RuntimeError: Cannot run the event loop while another loop is running
Code structure:
ui2/conftest.py
ui2/test_bing.py
ui2/conftest.py
import pytest
import asyncio

@pytest.fixture(scope="session")
def event_loop():
    """Override the default event_loop fixture."""
    loop = asyncio.get_event_loop()
    yield loop
    loop.close()
ui2/test_bing.py
import pytest
from playwright.async_api import Page

@pytest.mark.asyncio
async def test_bing(page: Page):
    await page.goto("http://www.bing.com")
env:
pytest==7.1.2
pytest-asyncio==0.18.3
pytest-playwright==0.3.0
Detailed exception as below:
Because you're importing from the async_api it sounds like you're writing asynchronous integration tests that you want to run concurrently. pytest-asyncio runs coroutine tests serially, so you instead want to use pytest-asyncio-cooperative. (If you wanted the tests run serially you should use from playwright.sync_api import Page.) I'd suggest:
Install pytest-asyncio-cooperative
pip install pytest-asyncio-cooperative
Remove the event_loop fixture from ui2/conftest.py. pytest-asyncio-cooperative implicitly runs all test coroutines on the same event loop.
Mark your async tests with @pytest.mark.asyncio_cooperative:
import pytest
from playwright.async_api import Page

@pytest.mark.asyncio_cooperative
async def test_bing(page: Page):
    await page.goto("http://www.bing.com")
Run your tests with the -p no:asyncio option. pytest-asyncio is not compatible with pytest-asyncio-cooperative, so it has to be disabled or uninstalled.
pytest -p no:asyncio
Short solution.
Install nest_asyncio:
pip install nest_asyncio
Then add this to your main conftest.py file:
import nest_asyncio
nest_asyncio.apply()
Find a more detailed explanation over here:
https://pypi.org/project/nest-asyncio/
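Putting it together, a minimal sketch of what ui2/conftest.py could look like with this workaround (the session-scoped event_loop fixture is carried over from the question; whether you still need it depends on your setup):
import asyncio
import nest_asyncio
import pytest

# patch asyncio so a running event loop can be re-entered
nest_asyncio.apply()

@pytest.fixture(scope="session")
def event_loop():
    loop = asyncio.get_event_loop()
    yield loop
    loop.close()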

How to use pytest reuse-db correctly

I have broken my head trying to figure out how --reuse-db works. I have a super-simple Django project with one model, Student, and the following test:
import pytest
from main.models import Student

@pytest.mark.django_db
def test_1():
    Student.objects.create(name=1)
    assert Student.objects.all().count() == 1
When I run it for the first time with the command pytest --reuse-db, the test passes - and I am not surprised.
But when I run pytest --reuse-db for the second time, I expect that the db is not destroyed and the test fails, because I expect that Student.objects.all().count() == 2.
Am I misunderstanding the --reuse-db flag?
--reuse-db only tells pytest-django to keep the test database itself (its schema) between test runs instead of dropping and re-creating it.
It does not preserve the rows your tests create: every test marked with django_db runs inside a transaction that is rolled back when the test ends, so the second run starts from an empty table again and the count stays 1.
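To see that rollback isolation directly, here is a minimal sketch (assuming the same Student model): even two tests within the same run each start from an empty table.
import pytest
from main.models import Student

@pytest.mark.django_db
def test_first():
    Student.objects.create(name=1)
    # only the row created inside this test is visible
    assert Student.objects.count() == 1

@pytest.mark.django_db
def test_second():
    # the row from test_first was rolled back at the end of that test
    assert Student.objects.count() == 0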

Unable to execute nested Unix commands in Spark scala

I'm trying to list a folder in AWS S3 and get only the filenames out of it. The nested Unix commands are not getting executed in spark-shell and throw an error. I know there are other ways to do it by importing org.apache.hadoop.fs._
The commands that I'm trying are:
import sys.process._
var cmd_exec = "aws s3 ls s3://<bucket-name>/<folder-name>/"
cmd_exec !!
If I execute it by piping the ls output into cut, it throws an error:
import sys.process._
var cmd_exec = "aws s3 ls s3://<bucket-name>/<folder-name>/ | cut -d' ' -f9-"
cmd_exec !!
Error message: Unknown options: |,cut,-d',',-f9-
java.lang.RuntimeException: Nonzero exit value: 255
Any suggestion please?
AFAIK, this is expected behaviour.
import scala.sys.process._
val returnValue: Int = Process("cat mycsv.csv | grep -i Lazio")!
The code above won't work either.
| is a shell pipe operator that feeds one command's output into another, but sys.process does not run the string through a shell, so the pipe and everything after it are passed to aws as literal arguments. So instead of that,
capture the output of the first command and then run the next command on it.
You can also see this article - A Scala shell script example - where a Scala program is executed as a shell script; it might be useful.
TIY!

How to aggregate test results to publish to testrail after executing pytest tests with xdist?

I'm running into a problem like this. I'm currently using pytest to run test cases, reducing execution time by using xdist to run tests in parallel, and publishing test results to TestRail. The issue is that when using xdist, the pytest-testrail plugin creates a test run for each xdist worker and then publishes the test cases as Untested.
I tried the pytest_terminal_summary hook to prevent the pytest_sessionfinish plugin hook from being called multiple times.
I expect only one test run to be created, but multiple test runs are still created.
I ran into the same problem, but found a kind of duct-tape workaround.
I found that all results are collected properly into a single test run if we run the tests with the --tr-run-id option.
If you are using Jenkins jobs to automate the process, you can do the following:
1) create a test run using the TestRail API
2) get the ID of this test run
3) run the tests with --tr-run-id=$TEST_RUN_ID
I used these docs:
http://docs.gurock.com/testrail-api2/bindings-python
http://docs.gurock.com/testrail-api2/reference-runs
from testrail import *
import sys

client = APIClient('URL')
client.user = 'login'
client.password = 'password'

# create a new run in project 1, named after the first CLI argument, and print its ID
result = client.send_post('add_run/1', {"name": sys.argv[1], "assignedto_id": 1}).get("id")
print(result)
Then, in the Jenkins shell:
RUN_ID=`python3 testrail_run.py $BUILD_TAG`
and then
python3 -m pytest -n 3 --testrail --tr-run-id=$RUN_ID --tr-config=testrail.cfg ...

How can I debug a funcargs function?

How can I drop into pdb inside a funcargs function? And how can I see output from print statements in funcargs functions?
My original question included the following, but it turns out I was simply instrumenting the wrong funcarg. Sigh.
I tried:
print "hi from inside funcargs"
invoking with and without -s.
I tried:
import pytest
pytest.set_trace()
And:
import pdb
pdb.set_trace()
And:
raise "hi from inside funcargs"
None produced any output or caused a test failure.
The first thing that comes to mind is py.test -s.
But by default funcargs give you tracebacks and output/error - what plugins are you using? Something is clearly hiding it.
For example, for the program
def pytest_funcarg__foo(request):
    print 'hi'
    raise IOError

def test_fun(foo):
    pass
a py.test call gives me both - a traceback in the funcarg function and the printed text.
To debug a funcarg:
def pytest_funcarg__myfuncarg(request):
    import pytest
    pytest.set_trace()
    ...

def test_function(myfuncarg):
    ...
Then:
python -m pytest test_function.py
As Ronny answered, to see output from a funcarg, pytest -s works.