How to skip a test in pytest based on command-line option / fixture - pytest

I have a pytest suite containing multiple files that test web services. The tests can be run against different content types, call them type A and type B, and the user specifies which type the tests should be run for. While most tests apply to both type A and type B, some are not applicable to type B. I need to be able to skip certain tests when pytest is run with the --type=B flag.
Here is my conftest.py file where I setup a fixture based on type
import pytest

# Enable type argument
def pytest_addoption(parser):
    parser.addoption("--type", action="store", default="A", help="Specify a content type, allowed values: A, B")

@pytest.fixture(scope="session", autouse=True)
def type(request):
    if request.node.get_closest_marker('skipb') and request.config.getoption('--type') == 'B':
        pytest.skip('This test is not valid for type B so it was skipped')
    print("Is type B")
    return request.config.getoption("--type")
Then, on each test function to be skipped, I add the marker as follows:
class TestService1(object):
    @pytest.mark.skipb()
    def test_status(self, getResponse):
        assert_that(getResponse.ok, "HTTP Request OK").is_true()
        printResponse(getResponse)

class TestService2(object):
    @pytest.mark.skipb()
    def test_status(self, getResponse):
        assert_that(getResponse.ok, "HTTP Request OK").is_true()
        printResponse(getResponse)
I am able to run pytest and don't get any interpreter errors, but it doesn't skip my test. Here is the command I use to run the tests:
pytest -s --type=B
Update: I need to clarify that my tests are spread across multiple classes. I have updated my code example to make this clearer.

We use something very similar to run an "extended parameter set". For this, you can use the following code:
In conftest.py:
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--extended-parameter-set",
        action="store_true",
        default=False,
        help="Run an extended set of parameters.")

def pytest_collection_modifyitems(config, items):
    extended_parameter_set = config.getoption("--extended-parameter-set")
    skip_extended_parameters = pytest.mark.skip(
        reason="This parameter combination is part of the extended parameter "
               "set.")
    for item in items:
        if (not extended_parameter_set
                and "extended_parameter_set" in item.keywords):
            item.add_marker(skip_extended_parameters)
Now you can mark whole tests, or only some parametrizations of a test, with "extended_parameter_set", and they will only be run if pytest is invoked with the --extended-parameter-set option.
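For completeness, here is a minimal sketch (test names are made up for illustration) of how a whole test, or a single parametrization, can be tagged so that the hook above picks it up; registering the marker in your pytest configuration also avoids the unknown-marker warning:
import pytest

@pytest.mark.extended_parameter_set
def test_full_matrix():
    ...

@pytest.mark.parametrize("value", [
    1,
    pytest.param(1000, marks=pytest.mark.extended_parameter_set),
])
def test_some_values(value):
    assert value > 0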

Related

How to negate a marker inside the test file

I'm running pytest tests on different controllers.
I have defined markers for each of my test targets:
@pytest.mark.pcm33
@pytest.mark.pcm21
@pytest.mark.pcm62
@pytest.mark.pcm52
@pytest.mark.pcm61
def test_something():
So when I run pytest -m pcm62 the test test_something is executed.
Now I have a test which must be executed on all my controllers, except for pcm62.
How can I negate this, so that the test is always executed except for pcm62? Something like this:
@pytest.notmark.pcm62
def test_something_except_for_pcm62():
But all that I can do is this:
@pytest.mark.pcm33
@pytest.mark.pcm21
# @pytest.mark.pcm62
@pytest.mark.pcm52
@pytest.mark.pcm61
def test_something_except_for_pcm62():
Notice that the more controllers I get to support, the longer my list of marker decorators gets.
pytest -m "not pcm62" does not do the trick because then I can not use the initial test_something in the same script.

Pytest + Appium test framework

I'm very new to automation development and am currently starting to write an Appium + pytest based Android app testing framework.
I managed to run tests on a connected device using this code, which seems to use unittest:
class demo(unittest.TestCase):
    reportDirectory = 'reports'
    reportFormat = 'xml'
    dc = {}
    driver = None
    # testName = 'test_setup_tmotg_demo'

    def setUp(self):
        self.dc['reportDirectory'] = self.reportDirectory
        self.dc['reportFormat'] = self.reportFormat
        # self.dc['testName'] = self.testName
        self.dc['udid'] = 'RF8MA2GW1ZF'
        self.dc['appPackage'] = 'com.tg17.ud.internal'
        self.dc['appActivity'] = 'com.tg17.ud.ui.splash.SplashActivity'
        self.dc['platformName'] = 'android'
        self.dc['noReset'] = 'true'
        self.driver = webdriver.Remote('http://localhost:4723/wd/hub', self.dc)

    # def test_function1():
    #     code
    # def test_function2():
    #     code
    # def test_function3():
    #     code
    # etc...

    def tearDown(self):
        self.driver.quit()

if __name__ == '__main__':
    unittest.main()
As you can see, all the functions are currently within the 'demo' class.
The intention is to create several test cases for each part of the app (for example: registration, main screen, premium subscription, etc.). That could sum up to hundreds of test cases eventually.
It seems to me that simply continuing to list them all in this same class would be messy and would give me very limited control. However, I didn't find any other way to arrange my tests while keeping the device connected via Appium.
The question is what would be the right way to organize the project so that I can:
Set up the device with appium server
Run all the test suites in sequential order (registration, main screen, subscription, etc...).
Perform the cleaning... export results, disconnect device, etc.
I hope I described the issue clearly enough. Would be happy to elaborate if needed.
Well, you have a lot of questions here, so it might be good to split them up into separate threads. But first of all, you can learn a lot about how Appium works by checking out its documentation, and likewise for the unittest framework.
All Appium cares about is the capabilities file (or variable). So you can either populate it manually or write some helper function to do that for you. Here is a list of what can be used.
You can create as many test classes(or suites) as you want and add them together in any order you wish. This helps to break things up into manageable chunks. (See example below)
You will have to create some helper methods here as well, since Appium itself will not do much cleaning. You can use the adb command in the shell for managing Android devices (see the sketch after the example below).
import unittest
from unittest import TestCase

# Create a Base class for common methods
class BaseTest(unittest.TestCase):
    # setUpClass method will only be run once, not for every suite/test
    @classmethod
    def setUpClass(cls) -> None:
        # Init your driver and read the capabilities here
        pass

    @classmethod
    def tearDownClass(cls) -> None:
        # Do cleanup, close the driver, ...
        pass

# Use the BaseTest class from before
# You can then duplicate this class for other suites of tests
class TestLogin(BaseTest):
    @classmethod
    def setUpClass(cls) -> None:
        super(TestLogin, cls).setUpClass()
        # Do things here that are needed only once (like logging in)

    def setUp(self) -> None:
        # This is executed before every test
        pass

    def testOne(self):
        # Write your tests here
        pass

    def testTwo(self):
        # Write your tests here
        pass

    def tearDown(self) -> None:
        # This is executed after every test
        pass

if __name__ == '__main__':
    # Load the tests from the suite class we created
    test_cases = unittest.defaultTestLoader.loadTestsFromTestCase(TestLogin)
    # If you want to add more suites
    test_cases.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(TestSomethingElse))
    # Run the actual tests
    unittest.TextTestRunner().run(test_cases)
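As a rough sketch of the adb-based cleanup helpers mentioned above (the device id and package name are placeholders, not values from the question), tearDownClass could call something like this:
import subprocess

def clear_app_data(udid, package):
    """Wipe the app's data on a specific device via adb."""
    subprocess.run(["adb", "-s", udid, "shell", "pm", "clear", package], check=True)

def disconnect_device(udid):
    """Drop the adb connection to a networked device (not needed for USB devices)."""
    subprocess.run(["adb", "disconnect", udid], check=False)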

Elixir Postgres view returning empty dataset when testing

I am trying to test a view created in Postgres, but it is returning an empty result set. However, when testing out the view in an Elixir interactive shell, I get back the expected data. Here are the steps I have taken to create and test the view:
Create a migration:
def up do
  execute """
  CREATE VIEW example_view AS
  ...
Create the schema:
import Ecto.Changeset

schema "test_view" do
  field(:user_id, :string)
Test:
describe "example loads" do
setup [
:with_example_data
]
test "view" do
query = from(ev in Schema.ExampleView)
IO.inspect Repo.all(query)
end
end
The response back is an empty array []
Is there a setting that I am missing to allow views to be queried in the test environment?
As pointed out in one of the comments:
iex, mix phx.server, etc. run in the :dev environment and use the dev DB
tests use the :test environment and run on a separate DB
It actually makes a lot of sense, because you want your test suite to be reproducible and independent of whatever records you might create/edit in your dev env.
You can open iex in the :test environment to confirm that your query returns the empty array here too:
MIX_ENV=test iex -S mix
What you'll need is to populate your test DB with some known records before querying. There are at least 2 ways to achieve that: fixtures and seeds.
Fixtures:
define some helper functions to create records in test/support/test_helpers.ex (typically: takes some attrs, adds some defaults and calls some create_ function from your context)
def foo_fixture(attrs \\ %{}) do
  {:ok, foo} =
    attrs
    |> Enum.into(%{name: "foo", bar: " default bar"})
    |> MyContext.create_foo()

  foo
end
call them within your setup function or test case before querying
Side note: you should use DataCase for tests involving the DB. With DataCase, each test is wrapped in its own transaction, and any fixture that you created will be rolled back at the end of the test, so tests are isolated and independent from each other.
Seeds:
If you want to include some "long-lasting" records as part of your "default state" (e.g. for a list of countries, categories...), you could define some seeds in priv/repo/seeds.exs.
The file should have been created by the Phoenix generator and indicates how to add seeds (typically using Repo.insert!/1).
By default, mix will run those seeds whenever you run mix ecto.setup or mix ecto.reset, just after your migrations (whatever env is used).
To apply any changes in seeds.exs, you can run the following:
# reset dev DB
mix ecto.reset
# reset test DB
MIX_ENV=test mix ecto.reset
If you need some seeds to be environment specific, you can always introduce different seed files (e.g. dev_seeds.exs) and modify your mix.exs to configure ecto.setup.
Seeds can be very helpful not only for tests but for dev/staging in the early stage of a project, while you are still tinkering a lot with your schema and you are dropping the DB frequently.
I usually find myself using a mix of both approaches.

Using the same object from different PyTest testfiles?

I'm working with pytest right now. My problem is that I need to use the same object generated in test_file1.py in another file, test_file2.py; the two files are in different directories and are invoked separately from one another.
Here's the code:
$ testhandler.py
# Starts the first testcases
returnValue = pytest.main(["-x", "--alluredir=%s" % test1_path, "--junitxml=%s" % test1_path+"\\JunitOut_test1.xml", test_file1])
# Starts the second testcases
pytest.main(["--alluredir=%s" % test2_path, "--junitxml=%s" % test2_path+"\\JunitOut_test2.xml", test_file2])
As you can see, the first one is critical, therefore I start it with -x to interrupt if there is an error. And --alluredir deletes the target directory before starting the new tests. That's why I decided to invoke pytest twice in my testhandler.py (and possibly more often in the future).
Here are the test files:
$ test1_directory/test_file1.py
@pytest.fixture(scope='session')
def object():
    # Generate reusable object from another file

def test_use_object(object):
    # use the object generated above
Note that the object is actually a class with parameters and functions.
$ test2_directory/test_file2.py
def test_use_object_from_file1():
    # reuse the object
I tried to generate the object in testhandler.py and import it into both test files. The problem was that the object was not exactly the same as in testhandler.py or test_file1.py.
My question is whether there is a way to use exactly that one generated object, maybe with a global conftest.py or something like that.
Thank you for your time!
By exactly the same you mean a similar object, right? The only way to do this is to marshal it in the first process and unmarshal it in the other process. One way to do it is by using json or pickle as the marshaller, and passing the filename of the json/pickle file so the object can be read back.
Here's some sample code, untested:
# conftest.py
import os
import pickle

import pytest

def pytest_addoption(parser):
    parser.addoption("--marshalfile", help="file name to transfer files between processes")

@pytest.fixture(scope='session')
def object(request):
    filename = request.config.getoption('--marshalfile')
    if filename is None:
        raise pytest.UsageError('--marshalfile required')
    if not os.path.isfile(filename):
        # dump object
        obj = create_expensive_object()
        with open(filename, 'wb') as f:
            pickle.dump(obj, f)
    else:
        # load object, hopefully in the other process
        with open(filename, 'rb') as f:
            obj = pickle.load(f)
    return obj
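Tying this back to the testhandler.py from the question, both pytest.main invocations would then need to receive the same --marshalfile path (the file name below is just a placeholder), so that the second run can unpickle what the first run dumped:
marshal_path = "shared_object.pickle"
# first, critical run creates and dumps the object
returnValue = pytest.main(["-x", "--marshalfile=%s" % marshal_path,
                           "--alluredir=%s" % test1_path,
                           "--junitxml=%s" % test1_path + "\\JunitOut_test1.xml",
                           test_file1])
# second run loads the same object from the pickle file
pytest.main(["--marshalfile=%s" % marshal_path,
             "--alluredir=%s" % test2_path,
             "--junitxml=%s" % test2_path + "\\JunitOut_test2.xml",
             test_file2])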

How to get PyTest fixtures to autocomplete in PyCharm (type hinting)

I had a bear of a time figuring this out, and it was really bugging me, so I thought I'd post this here in case anyone hit the same problem...
(and the answer is so dang simple it hurts :-)
The Problem
The core of the issue is that sometimes, not always, when dealing with fixtures in PyTest that return objects, when you use those fixtures in a test in PyCharm, you don't get autocomplete hints. If you have objects with large numbers of methods you want to reference while writing a test, this can add a lot of overhead and inconvenience to the test writing process.
Here's a simple example to illustrate the issue:
Let's say I've got a class "event_manager" that lives in:
location.game.events
Let's further say that in my conftest.py file (PyTest standard thing for the unfamiliar), I've got a fixture that returns an instance of that class:
from location.game.events import event_manager
...

@pytest.fixture(scope="module")
def event_mgr():
    """Creates a new instance of event_manager for use in tests"""
    return event_manager()
I've had issues sometimes (but not always; I can't quite figure out why) with classes like this where autocomplete will not work properly in the test code where I use the fixture, e.g.:
def test_tc10657(self, evt_mgr):
    """Generates a Regmod and expects filemod to be searchable on server"""
    evt_mgr.(This does not offer autocomplete hints when you type ".")
So the answer is actually quite simple, once you review type hinting in PyCharm:
http://www.jetbrains.com/help/pycharm/2016.1/type-hinting-in-pycharm.html
Here's how to fix the above test code so that autocomplete works properly:
from location.game.events import event_manager
...

def test_tc10657(self, evt_mgr: event_manager):
    """Generates a Regmod and expects filemod to be searchable on server"""
    evt_mgr.(This DOES offer hints when you type "." Yay!)
Notice how I explicitly type the fixture as an input parameter of type event_manager.
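A related option (not part of the original answer, just standard Python typing) is to annotate the fixture's return type in conftest.py; recent PyCharm versions can usually infer the type at the use site from that as well, without annotating every test parameter:
import pytest
from location.game.events import event_manager

@pytest.fixture(scope="module")
def event_mgr() -> event_manager:
    """Creates a new instance of event_manager for use in tests"""
    return event_manager()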
Also, if you add a docstring to a function and specify the type of the parameters, you will get code completion for those parameters.
For example using pytest and Selenium:
# The remote webdriver seems to be the base class for the other webdrivers
from selenium.webdriver.remote.webdriver import WebDriver

def test_url(url, browser_driver):
    """
    This method is used to see if IBM is in the URL title
    :param WebDriver browser_driver: The browser's driver
    :param str url: the URL to test
    """
    browser_driver.get(url)
    assert "IBM" in browser_driver.title
Here's my conftest.py file as well
import pytest
from selenium import webdriver

# Method to handle the command line arguments for pytest
def pytest_addoption(parser):
    parser.addoption("--driver", action="store", default="chrome", help="Type in browser type")
    parser.addoption("--url", action="store", default='https://www.ibm.com', help="url")

@pytest.fixture(scope='module', autouse=True)
def browser_driver(request):
    browser = request.config.getoption("--driver").lower()
    # yield the driver for the specified browser
    if browser == "chrome":
        driver = webdriver.Chrome(executable_path='/path/to/chromedriver')
    else:
        raise Exception("No driver for browser " + browser)
    yield driver
    driver.quit()

@pytest.fixture(scope="module")
def url(request):
    return request.config.getoption("--url")
Tested using Python 2.7 and PyCharm 2017.1. The docstring format is reStructuredText and the "Analyze Python code in docstrings" checkbox is checked in settings.
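For reference, a run against the conftest above would look something like this (both options fall back to their defaults when omitted):
pytest --driver=chrome --url=https://www.ibm.com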