How to aggregate test results to publish to TestRail after executing pytest tests with xdist?

I'm running into the following problem. I'm currently using pytest to run test cases, reducing execution time with xdist to run tests in parallel, and publishing the test results to TestRail. The issue is that when xdist is used, the pytest-testrail plugin creates a separate test run for each xdist worker and then publishes the test cases as Untested.
I tried the pytest_terminal_summary hook to prevent the plugin's pytest_sessionfinish hook from being called multiple times.
I expected only one test run to be created, but multiple test runs are still created.
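The kind of guard I was experimenting with in conftest.py looks roughly like this (a minimal sketch of the idea only, not the pytest-testrail plugin's actual code; it assumes pytest-xdist's convention that worker processes carry a workerinput attribute on the config object while the controller process does not):

def pytest_sessionfinish(session, exitstatus):
    # xdist worker processes have a "workerinput" attribute on config;
    # only the controller process (which doesn't) should publish results.
    if hasattr(session.config, "workerinput"):
        return
    # ... publish the aggregated results to TestRail here ...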

I ran into the same problem, but found a kind of duct-tape workaround.
I found that all results are collected properly into a single test run if the tests are run with the --tr-run-id key.
If you are using Jenkins jobs to automate the process, you can do the following:
1) create a test run using the TestRail API
2) get the ID of this test run
3) run the tests with --tr-run-id=$TEST_RUN_ID
I used these docs:
http://docs.gurock.com/testrail-api2/bindings-python
http://docs.gurock.com/testrail-api2/reference-runs
import sys

from testrail import APIClient  # Gurock's Python binding (testrail.py)

client = APIClient('URL')
client.user = 'login'
client.password = 'password'

# create a test run in project 1, named after the first CLI argument
# (the Jenkins $BUILD_TAG), and print its ID so the shell can capture it
run_id = client.send_post('add_run/1', {"name": sys.argv[1], "assignedto_id": 1}).get("id")
print(run_id)
Then in the Jenkins shell:
RUN_ID=`python3 testrail_run.py $BUILD_TAG`
and then:
python3 -m pytest -n 3 --testrail --tr-run-id=$RUN_ID --tr-config=testrail.cfg ...
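If you also want the Jenkins job to close the run once pytest has finished, the same APIClient can be reused; a rough sketch (close_run is documented in the same runs reference linked above, and the run ID is assumed to be passed as the first command-line argument):

import sys

from testrail import APIClient

client = APIClient('URL')
client.user = 'login'
client.password = 'password'

# close the test run whose ID is passed on the command line, e.g. $RUN_ID
client.send_post('close_run/{}'.format(sys.argv[1]), {})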

Related

Karate-Gatling: Not able to run scenarios based on tags

I am trying to run a performance test on the scenario tagged as perf from the feature file below:
@tag1 @tag2 @tag3
Background:
  user login

@tag4 @perf
Scenario: scenario1

@tag4
Scenario: scenario2
Below is my .scala file setup:
import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._

class PerfTest extends Simulation {
  val protocol = karateProtocol()
  val getTags = scenario("Name goes here").exec(karateFeature("classpath:filepath"))
  setUp(
    getTags.inject(
      atOnceUsers(1)
    ).protocols(protocol)
  )
}
I have tried passing the tags from the command line, as well as passing the tag as an argument to the exec method in the Scala setup.
Terminal command:
mvn clean test-compile gatling:test "-Dkarate.env={env}" "-Dkarate.options= --tags @perf"
.scala update: I have also tried passing the tag as an argument to karateFeature:
val getTags = scenario("Name goes here").exec(karateFeature("classpath:filepath", "@perf"))
Both scenarios are executed with either approach. Any pointers on how I can force only the test tagged perf to run?
I wanted to share my finding here. I realized it works fine when I pass the tag info in the .scala file.
My scenario with the perf tag was a combination of a GET and a POST call, as I needed some data from the GET call to pass to the POST call. That's why I was seeing both calls when running the performance test.
I did not find any reference in the Karate-Gatling documentation to passing tags on the terminal command, so I am assuming that might not be a supported case.

How to use pytest reuse-db correctly

I have been racking my brain trying to figure out how --reuse-db works. I have a super-simple Django project with one model, Student, and the following test:
import pytest
from main.models import Student

@pytest.mark.django_db
def test_1():
    Student.objects.create(name=1)
    assert Student.objects.all().count() == 1
When I run it for the first time with the command pytest --reuse-db, the test passes, and I am not surprised.
But when I run pytest --reuse-db for the second time, I expect that the db is not destroyed and the test fails, because I expect Student.objects.all().count() == 2.
Am I misunderstanding the --reuse-db flag?
--reuse-db only tells pytest-django to keep the test database and reuse it on the next run instead of re-creating it; it does not make data written by tests persist.
Each test marked with django_db runs inside a transaction that is rolled back when the test finishes, so on the second run the table is empty again and the count is 1, not 2.
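A quick way to see this is to add a second test (a hypothetical example reusing the same Student model, only to illustrate the rollback behaviour):

import pytest
from main.models import Student

@pytest.mark.django_db
def test_first():
    Student.objects.create(name=1)
    assert Student.objects.all().count() == 1

@pytest.mark.django_db
def test_second():
    # nothing created in test_first is visible here: the transaction that
    # wrapped test_first was rolled back, with or without --reuse-db
    assert Student.objects.all().count() == 0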

How to run a sub-set of TestCases using --where for nunit

For my project I want to run the exact same test cases twice: once locally and once, in parallel, on a different VM in the cloud (Azure in my case).
I duplicated the TestCase and tagged one with Category("Local") and the other with Category("Cloud").
Running nunit3 from the console with --where="cat == Cloud" will thus run all TestCases of every test that has one or more TestCases tagged with Category("Cloud").
Is there a different way of running only selected TestCases via a command-line switch?
Simplified example:
[TestCase(TestName = "Canary, Run in cloud."), Category("Cloud")]
[TestCase(TestName = "Canary, Run locally."), Category("Local")]
public void Canary()
{
    Assert.True(true);
}
I found a workaround: pass --params:Cloud=true as a command-line argument and check it in the code:
private bool ShallRunInCloud => TestContext.Parameters["Cloud"]?.ToLowerInvariant() == "true";

Pytest: collecting 0 items even after following the conventions

I created a test module by following all the conventions, but when I run the test, I get the following message:
collecting 0 items
Here's my directory hierarchy:
integration_tests (Directory)-> tests (Directory)-> test_integration_use_cases.py (python file)
And this is the content of the file:
import pytest
from some_tests.integration_tests.backbone.SomeIntegrationTestBase import SomeIntegrationTestBase

class TestSomeIntegration(SomeIntegrationTestBase):

    @pytest.mark.p1
    def test_some_integration_use_cases(self):
        print("**** Executing integration tests ****")
        result = self.execute_test(4)
        assert (True == result)
When I run the following command:
pytest test_integration_use_cases.py
I see the following result without any errors:
collecting 0 items
FYI: I am running this on a development machine (like a Vagrant box).
I had the same problem as you, even after following all the recommended conventions. My application structure was as follows:
Application
-- API
   app.py
-- docs
-- venv
-- tests
   -- unit_test
      test_factory
      ...
I, however, resolved the issue by moving the tests directory under the API package, so that my application structure looked as below:
Application
-- API
   app.py
   -- tests
      -- unit_test
         test_factory
         ...
-- docs
-- venv
...
Although pytest is supposed to auto-discover tests, it seems to do so only when they are placed under the application root. Check out the pytest docs for Flask.
I also found this resource helpful.
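For reference, the conventions pytest relies on when auto-discovering tests are: file names matching test_*.py or *_test.py, classes named Test* that do not define an __init__ method, and test functions or methods named test_*. A minimal file that is collected by default (a generic illustration, not tied to the project above):

class TestExample:
    def test_passes(self):
        assert True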

NUnit console runner not running any tests

So I am trying to run a PowerShell script, triggered by TeamCity, that runs specific unit tests based on the names of the files changed in each GitHub commit.
Here is how I am running it from the command line:
C:\MyFolder\bin\NUnit.ConsoleRunner.3.4.1\tools\nunit3-console.exe "C:\MyFolder\Bin\UnitTesting.dll" --test="MyFolder.QuickTests.DaoTests.ProductDaoTests.ProductBasicTest"
But I keep getting this; it runs, it just never executes any tests:
NUnit Console Runner 3.4.1
Copyright (C) 2016 Charlie Poole
Runtime Environment
OS Version: Microsoft Windows NT 10.0.14393.0
CLR Version: 4.0.30319.42000
Test Files
MyFolder\Bin\UnitTesting.dll
Test Filters
Test: MyFolder.QuickTests.DaoTests.ProductDaoTests.ProductBasicTest
Run Settings
WorkDirectory: C:\Users\Me
ImageRuntimeVersion: 4.0.30319
ImageTargetFrameworkName: .NETFramework,Version=v4.0
ImageRequiresX86: False
ImageRequiresDefaultAppDomainAssemblyResolver: False
NumberOfTestWorkers: 2
Test Run Summary
Overall result: Passed
Test Count: 0, Passed: 0, Failed: 0, Inconclusive: 0, Skipped: 0
Start time: 2016-10-17 20:28:43Z
End time: 2016-10-17 20:28:43Z
Duration: 0.303 seconds
Results (nunit3) saved as TestResult.xml
Now when I run it without the --test command like this:
C:\MyFolder\bin\NUnit.ConsoleRunner.3.4.1\tools\nunit3-console.exe "C:\MyFolder\Bin\UnitTesting.dll"
It runs every unit test that we have. But I don't want to run them all; I want to run specific quick ones and only run the large ones when we go to the staging/production servers, so our developers don't have to wait 15 to 20 minutes every time they commit something.
Some additional info:
- The namespace I am using for this is MyFolder.QuickTests.DaoTests.ProductDaoTests
- The class I am calling is ProductBasicTest
- Some of the names, like the folder directories, were changed because they are %teamcity% placeholders for file directories.
What am I doing wrong that I am not able to run specific tests?
For some reason my nunit-console is not recognizing the /run or /fixture options, or --test=.
EDIT:
I upgraded to 3.5.0 and am still getting the same issue; I am not able to use --test.
C:\MyFolder\bin\NUnit.ConsoleRunner.3.5.0\tools\nunit3-console.exe "C:\MyFolder\Bin\UnitTesting.dll" --test="MyFolder.QuickTests.DaoTests.ProductDaoTests.ProductBasicTest"
That is the new location, and I am getting the same issue.
When I use --where for MyFolder, it crashes PowerShell but doesn't actually run anything.
When I use --explore, it does the same as --where for MyFolder and does nothing for MyFolder.QuickTests.
EDIT EDIT:
Thanks to Rob I found the docs here and looked at the --where option: --where "name=ProductBasicTest" runs everything in that test suite!
So, thanks to Rob, one of the issues I was running into was that it was not recognizing my namespace correctly with QuickTests, so whenever I ran the command it did not run correctly.
To fix this, you can look at the TestResult.xml output and see what names the tests were run under.
To run these individually, you can run by name with the command:
nunit3-console.exe C:\PathToDll.dll --where "name = NameOfTest"