I'm running pytest tests on different controllers.
I have defined markers for each of my test targets:
@pytest.mark.pcm33
@pytest.mark.pcm21
@pytest.mark.pcm62
@pytest.mark.pcm52
@pytest.mark.pcm61
def test_something():
So when I run pytest -m pcm62, the test test_something is executed.
Now I have a test which must be executed on all my controllers except pcm62.
How can I negate the marker so that the test is always executed except on pcm62? Something like this:
@pytest.notmark.pcm62
def test_something_except_for_pcm62():
But all I can do is this:
@pytest.mark.pcm33
@pytest.mark.pcm21
# @pytest.mark.pcm62
@pytest.mark.pcm52
@pytest.mark.pcm61
def test_something_except_for_pcm62():
Notice that the more controllers I get to support, the longer my list of marker decorators gets.
pytest -m "not pcm62" does not do the trick, because then I cannot use the initial test_something in the same script.
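One pattern that can avoid listing every controller is to invert the logic in conftest.py: mark the test with the controllers it should not run on, and expand that into the usual positive markers during collection. This is only a sketch, not the accepted solution; the not_on marker name and the ALL_CONTROLLERS set are assumptions introduced here:

# conftest.py -- sketch; not_on and ALL_CONTROLLERS are illustrative names
import pytest

ALL_CONTROLLERS = {"pcm33", "pcm21", "pcm62", "pcm52", "pcm61"}

def pytest_configure(config):
    # Register the markers so pytest does not warn about unknown marks.
    config.addinivalue_line("markers", "not_on(*controllers): exclude the listed controllers")
    for name in ALL_CONTROLLERS:
        config.addinivalue_line("markers", f"{name}: test runs on controller {name}")

def pytest_collection_modifyitems(items):
    for item in items:
        not_on = item.get_closest_marker("not_on")
        if not_on is None:
            continue
        # Expand @pytest.mark.not_on("pcm62") into positive markers for every
        # other controller, so `pytest -m pcm33` still selects the test while
        # `pytest -m pcm62` does not. New controllers only need to be added
        # to ALL_CONTROLLERS.
        for name in ALL_CONTROLLERS - set(not_on.args):
            item.add_marker(getattr(pytest.mark, name))

With this in place, a test would be written as @pytest.mark.not_on("pcm62") instead of carrying one decorator per supported controller.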
I want to disable capture by default in a pytest plugin I'm working on. Normally this can be done using an "ini" file, as below:
[pytest]
addopts = -s
In the pytest docs they discuss "addini", and this works for other values I've tried but not addopts.
https://docs.pytest.org/en/latest/reference/reference.html#pytest.Parser.addini
https://docs.pytest.org/en/latest/reference/reference.html#pytest.hookspec.pytest_addoption
An alternative approach is to override the argument directly, like so:
def pytest_addoption(parser):
    parser.addoption(
        "--capture",
        dest="capture",
        help="Capture the output",
        default='no'
    )
...but this complains that there is an argument conflict with the existing capture argument.
Is there some better practice for setting default values of addopts that I haven't thought of?
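One option worth sketching (it is an assumption here, not a confirmed answer) is to inject the flag into the initial arguments via the pytest_load_initial_conftests hook. Note that this hook only fires for plugins loaded early, e.g. via a setuptools entry point or -p, not for conftest.py files:

# plugin module -- sketch only; assumes the plugin is loaded via an entry point or -p,
# because pytest_load_initial_conftests is not called for conftest.py files
def pytest_load_initial_conftests(args, early_config, parser):
    # Respect an explicit user choice; otherwise default capture to "no" (same as -s).
    if not any(a == "-s" or a.startswith("--capture") for a in args):
        args.append("--capture=no")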
I'm very new to automation development and am currently starting to write an Appium + pytest based Android app testing framework.
I managed to run tests on a connected device using this code, which seems to use unittest:
# assuming these imports (the Appium Python client provides webdriver.Remote)
import unittest

from appium import webdriver


class demo(unittest.TestCase):
    reportDirectory = 'reports'
    reportFormat = 'xml'
    dc = {}
    driver = None
    # testName = 'test_setup_tmotg_demo'

    def setUp(self):
        self.dc['reportDirectory'] = self.reportDirectory
        self.dc['reportFormat'] = self.reportFormat
        # self.dc['testName'] = self.testName
        self.dc['udid'] = 'RF8MA2GW1ZF'
        self.dc['appPackage'] = 'com.tg17.ud.internal'
        self.dc['appActivity'] = 'com.tg17.ud.ui.splash.SplashActivity'
        self.dc['platformName'] = 'android'
        self.dc['noReset'] = 'true'
        self.driver = webdriver.Remote('http://localhost:4723/wd/hub', self.dc)

    # def test_function1():
    #     code
    # def test_function2():
    #     code
    # def test_function3():
    #     code
    # etc...

    def tearDown(self):
        self.driver.quit()


if __name__ == '__main__':
    unittest.main()
As you can see, all the functions are currently within the 'demo' class.
The intention is to create several test cases for each part of the app (for example: registration, main screen, premium subscription, etc.). That could add up to hundreds of test cases eventually.
It seems to me that simply continuing to list them all in this same class would be messy and would give me very limited control. However, I didn't find any other way to arrange my tests while keeping the device connected via Appium.
The question is what would be the right way to organize the project so that I can:
Set up the device with the Appium server.
Run all the test suites in sequential order (registration, main screen, subscription, etc.).
Perform the cleanup: export results, disconnect the device, etc.
I hope I described the issue clearly enough. I would be happy to elaborate if needed.
Well, you have a lot of questions here, so it might be good to split them up into separate threads. But first of all, you can learn a lot about how Appium works by checking out the documentation here, and for the unittest framework here.
All Appium cares about is the capabilities file (or variable). So you can either populate it manually or write some helper function to do that for you. Here is a list of what can be used.
You can create as many test classes (or suites) as you want and add them together in any order you wish. This helps to break things up into manageable chunks. (See the example below.)
You will have to create some helper methods here as well, since Appium itself will not do much cleaning. You can use the adb command in a shell for managing Android devices.
import unittest
from unittest import TestCase

# Create a Base class for common methods
class BaseTest(unittest.TestCase):

    # setUpClass will only be run once per class, not before every test
    @classmethod
    def setUpClass(cls) -> None:
        # Init your driver and read the capabilities here
        pass

    @classmethod
    def tearDownClass(cls) -> None:
        # Do cleanup, close the driver, ...
        pass

# Use the BaseTest class from before
# You can then duplicate this class for other suites of tests
class TestLogin(BaseTest):

    @classmethod
    def setUpClass(cls) -> None:
        super(TestLogin, cls).setUpClass()
        # Do things here that are needed only once (like logging in)

    def setUp(self) -> None:
        # This is executed before every test
        pass

    def testOne(self):
        # Write your tests here
        pass

    def testTwo(self):
        # Write your tests here
        pass

    def tearDown(self) -> None:
        # This is executed after every test
        pass

if __name__ == '__main__':
    # Load the tests from the suite class we created
    test_cases = unittest.defaultTestLoader.loadTestsFromTestCase(TestLogin)
    # If you want to add more suites, load and add them the same way:
    # test_cases.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(TestSomethingElse))
    # Run the actual tests
    unittest.TextTestRunner().run(test_cases)
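For the cleanup step mentioned above, a small wrapper around adb is often enough. This is a rough sketch only; the serial and package name are taken from the question, and the exact commands you need will differ:

import subprocess

def adb(serial, *args):
    # Run an adb command against one specific device and return its output.
    result = subprocess.run(["adb", "-s", serial, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Possible use in tearDownClass, e.g. reset the app data and stop it:
# adb("RF8MA2GW1ZF", "shell", "pm", "clear", "com.tg17.ud.internal")
# adb("RF8MA2GW1ZF", "shell", "am", "force-stop", "com.tg17.ud.internal")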
I am trying to test a view created in Postgres, but it is returning an empty result set. However, when testing out the view in an Elixir interactive shell, I get back the expected data. Here are the steps I have taken to create and test the view:
Create a migration:
def up do
  execute """
  CREATE VIEW example_view AS
  ...
Create the schema:
import Ecto.Changeset

schema "test_view" do
  field(:user_id, :string)
Test:
describe "example loads" do
setup [
:with_example_data
]
test "view" do
query = from(ev in Schema.ExampleView)
IO.inspect Repo.all(query)
end
end
The response back is an empty array: [].
Is there a setting I am missing that would allow views to be queried in the test environment?
As pointed out in one of the comments:
iex, mix phx.server, ... run in the :dev environment and use the dev DB
tests use the :test environment and run against a separate DB
It actually makes a lot of sense, because you want your test suite to be reproducible and independent of whatever records you might create or edit in your dev environment.
You can open iex in the :test environment to confirm that your query returns an empty array there too:
MIX_ENV=test iex -S mix
What you'll need is to populate your test DB with some known records before querying. There are at least 2 ways to achieve that: fixtures and seeds.
Fixtures:
define some helper functions to create records in test/support/test_helpers.ex (typically: take some attrs, add some defaults and call a create_ function from your context):
def foo_fixture(attrs \\ %{}) do
  {:ok, foo} =
    attrs
    |> Enum.into(%{name: "foo", bar: " default bar"})
    |> MyContext.create_foo()

  foo
end
call them within your setup function or test case before querying
side note: you should use DataCase for tests involving the DB. With DataCase, each test is wrapped in its own transaction, and any fixture you created is rolled back at the end of the test, so tests are isolated and independent of each other.
Seeds:
If you want to include some "long-lasting" records as part of your "default state" (e.g. for a list of countries, categories...), you could define some seeds in priv/repo/seeds.exs.
The file should have been created by the Phoenix generator and indicates how to add seeds (typically with Repo.insert!/1).
By default, mix will run those seeds whenever you run mix ecto.setup or mix ecto.reset, right after your migrations (whatever env is used).
To apply any changes in seeds.exs, you can run the following:
# reset dev DB
mix ecto.reset
# reset test DB
MIX_ENV=test mix ecto.reset
If you need some seeds to be environment-specific, you can always introduce different seed files (e.g. dev_seeds.exs) and modify your mix.exs to configure the ecto.setup alias accordingly.
Seeds can be very helpful not only for tests but also for dev/staging in the early stages of a project, while you are still tinkering a lot with your schema and dropping the DB frequently.
I usually find myself using a mix of both approaches.
I have a pytest suite containing multiple files that test web services. The tests can be run on different types, call them type A and type B, and the user can specify which type the tests should be run for. While most tests are applicable for types A and B, some are not applicable for type B. I need to be able to skip certain tests when pytest is run with the --type=B flag.
Here is my conftest.py file, where I set up a fixture based on the type:
import pytest

# Enable type argument
def pytest_addoption(parser):
    parser.addoption("--type", action="store", default="A", help="Specify a content type, allowed values: A, B")

@pytest.fixture(scope="session", autouse=True)
def type(request):
    if request.node.get_closest_marker('skipb') and request.config.getoption('--type') == 'B':
        pytest.skip('This test is not valid for type B so it was skipped')
        print("Is type B")
    return request.config.getoption("--type")
Then, before each test function that should be skipped, I add the marker as follows:
class TestService1(object):

    @pytest.mark.skipb()
    def test_status(self, getResponse):
        assert_that(getResponse.ok, "HTTP Request OK").is_true()
        printResponse(getResponse)


class TestService2(object):

    @pytest.mark.skipb()
    def test_status(self, getResponse):
        assert_that(getResponse.ok, "HTTP Request OK").is_true()
        printResponse(getResponse)
I am able to run pytest and don't get any interpreter errors; however, it doesn't skip my test. Here is the command I use to run the tests:
pytest -s --type=B
Update: I need to clarify that my tests are spread across multiple classes. I have updated my code example to make this clearer.
We use something very similar to run an "extended parameter set". For this, you can use the following code:
In conftest.py:
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--extended-parameter-set",
        action="store_true",
        default=False,
        help="Run an extended set of parameters.")

def pytest_collection_modifyitems(config, items):
    extended_parameter_set = config.getoption("--extended-parameter-set")
    skip_extended_parameters = pytest.mark.skip(
        reason="This parameter combination is part of the extended parameter "
               "set.")
    for item in items:
        if (not extended_parameter_set
                and "extended_parameter_set" in item.keywords):
            item.add_marker(skip_extended_parameters)
Now you can simply mark full tests, or only some parametrizations of a test, with extended_parameter_set, and they will only be run when pytest is invoked with the --extended-parameter-set option.
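For reference, marking a whole test or a single parametrization might look like this (the test names and values are just placeholders):

import pytest

@pytest.mark.extended_parameter_set
def test_full_sweep():
    ...

@pytest.mark.parametrize("value", [
    1,
    pytest.param(1000, marks=pytest.mark.extended_parameter_set),  # extended run only
])
def test_values(value):
    assert value > 0

Registering extended_parameter_set under the markers ini option (or via pytest_configure) avoids the unknown-marker warning.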
I am wondering if it is possible to override the test specified on the command line via +UVM_TESTNAME with +uvm_set_type_override.
I have tried it, and this is what I see printed in the log:
UVM_INFO @ 0: reporter [RNTST] Running test Test1...
UVM_INFO @ 0: reporter [UVM_CMDLINE_PROC] Applying type override from the command line: +uvm_set_type_override=Test1,Test2
So it seems to me that the test component is created first and then the factory overrides are applied?
I see the following pieces of code in uvm_root.svh:
// if test now defined, create it using common factory
if (test_name != "") begin
  if(m_children.exists("uvm_test_top")) begin
    uvm_report_fatal("TTINST",
      "An uvm_test_top already exists via a previous call to run_test", UVM_NONE);
    #0; // forces shutdown because $finish is forked
  end
  $cast(uvm_test_top, factory.create_component_by_name(test_name,
    "", "uvm_test_top", null));
It is using the factory, but I don't know whether the overrides have actually been applied at that point. I also see the following code:
begin
  if(test_name=="")
    uvm_report_info("RNTST", "Running test ...", UVM_LOW);
  else if (test_name == uvm_test_top.get_type_name())
    uvm_report_info("RNTST", {"Running test ",test_name,"..."}, UVM_LOW);
  else
    uvm_report_info("RNTST", {"Running test ",uvm_test_top.get_type_name()," (via factory override for test \"",test_name,"\")..."}, UVM_LOW);
end
I am wondering if the "else" branch above is ever executed, and under what condition it is executed.
It seems that there is an issue with command line processing order in UVM: +UVM_TESTNAME gets processed separately, before all the other options.
It is possible to set an override before calling run_test() in the initial block.
But what is the point of setting up the test name, and then overriding it on the same command line? Why not use the overridden test name as the test?
In general, anything registered with the UVM factory can be overridden at runtime with a command line switch.
In the case of test names, there is a command line switch called +UVM_TESTNAME=selected_test_name_here.
Typically,
We may have the base test name as the default in the run_test("your_base_test_name") call in the top module,
And then we can select various tests at runtime without recompiling (as long as each test has been included in the compile),
Passing +UVM_TESTNAME=selected_test_at_runtime, as we typically cycle through test names when running regressions or when switching tests while debugging the design.