Collect py.test tests info with markers

I'm using py.test and I want to get the list of tests that I have with marker info included.
When I use the --collect-only flag I get the test functions. Is there a way to get the assigned markers for each test also?
Based on Frank T's answer I created a workaround code sample:
from _pytest.mark import MarkInfo, MarkDecorator
import json


def pytest_addoption(parser):
    parser.addoption(
        '--collect-only-with-markers',
        action='store_true',
        help='Collect the tests with marker information without executing them'
    )


def pytest_collection_modifyitems(session, config, items):
    if config.getoption('--collect-only-with-markers'):
        for item in items:
            data = {}

            # Collect some general information
            if item.cls:
                data['class'] = item.cls.__name__
            data['name'] = item.name
            if item.originalname:
                data['originalname'] = item.originalname
            data['file'] = item.location[0]

            # Get the marker information
            for key, value in item.keywords.items():
                if isinstance(value, (MarkDecorator, MarkInfo)):
                    if 'marks' not in data:
                        data['marks'] = []
                    data['marks'].append(key)

            print(json.dumps(data))

        # Remove all items (we don't want to execute the tests)
        items.clear()
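With this in a conftest.py, running pytest --collect-only-with-markers prints one JSON object per collected test and then skips execution, because the item list is cleared. One caveat worth flagging: recent pytest versions no longer provide MarkInfo, so there the isinstance check would be limited to MarkDecorator (or replaced with item.iter_markers(), see the sketch further down).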

I don't think pytest has built-in behavior to list test functions along with the marker information for those tests. A --markers command lists all registered markers, but that's not what you want. I briefly looked over the list of pytest plugins and didn't see anything that looked relevant.
You can write your own pytest plugin to list tests along with marker info. Here is documentation on writing a pytest plugin.
I would try using the "pytest_collection_modifyitems" hook. It is passed a list of all tests that are collected, and it doesn't need to modify them. (Here is a list of all hooks.)
The tests passed to that hook have a get_marker() method if you know the name of the marker you're looking for (see this code for an example). When I was looking through that code, I could not find an official API for listing all markers. I found this to get the job done: test.keywords.__dict__['_markers'] (see here and here).
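For newer pytest versions (3.6 and later) there is an official API for this: item.iter_markers(), which yields every Mark applied to the item. A minimal sketch of a collection hook built on it (the --collect-markers flag name is only an illustration, not a pytest built-in):

import json


def pytest_addoption(parser):
    # hypothetical flag, pick whatever name suits your project
    parser.addoption('--collect-markers', action='store_true',
                     help='Print collected tests with their marks, then skip execution')


def pytest_collection_modifyitems(session, config, items):
    if not config.getoption('--collect-markers'):
        return
    for item in items:
        # iter_markers() yields Mark objects; each has .name, .args and .kwargs
        print(json.dumps({
            'nodeid': item.nodeid,
            'marks': [mark.name for mark in item.iter_markers()],
        }))
    # drop everything so no test actually runs
    items.clear()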

You can find the marker names via the pytestmark attribute of the test function, available as request.function.pytestmark inside a fixture:

@pytest.mark.scenarious1
@pytest.mark.scenarious2
@pytest.mark.scenarious3
def test_sample():
    pass


@pytest.fixture(scope='function', autouse=True)
def get_markers(request):
    print([marker.name for marker in request.function.pytestmark])

>>> ['scenarious3', 'scenarious2', 'scenarious1']

Note that they are listed in reverse order by default, because the decorators are applied bottom-up.
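On pytest 3.6 and later, the same information (plus marks inherited from the class or module, which pytestmark on the function does not show) is available through request.node.iter_markers(). A minimal variant of the fixture above using that API:

import pytest


@pytest.fixture(scope='function', autouse=True)
def get_markers(request):
    # request.node is the collected test item; iter_markers() also picks up
    # class-level and module-level marks
    print([mark.name for mark in request.node.iter_markers()])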


Unknown marker with pytest-bdd only when parameter is declared

When I declare a marker with a parameter in pytest.ini, it is not recognized in a pytest-bdd feature file. Markers without parameters seem to work fine.
[pytest]
markers =
    swr(issue1): link to Software Requirement
    smoke: Smoke Test component
A simple feature file works fine with @smoke:

Feature: Trivial Example

  @smoke
  Scenario: Add a number to another number
    Given 7 is set
    When 9 is added
    Then new value is 16

It fails with @swr("123"):

Feature: Trivial Example

  @swr("123")
  Scenario: Add a number to another number
    Given 7 is set
    When 9 is added
    Then new value is 16

The failure is a warning:
../../../../../.local/lib/python3.10/site-packages/pytest_bdd/plugin.py:127
/home/parallels/.local/lib/python3.10/site-packages/pytest_bdd/plugin.py:127: PytestUnknownMarkWarning: Unknown pytest.mark.swr("123") - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
mark = getattr(pytest.mark, tag)
Taking a look at the known issues in the repository, I stumbled upon something related. The developer mentions there is a hook available, which is seen here.
In the conftest.py we can then do the following:
from typing import Callable, cast
import pytest
import ast


def pytest_bdd_apply_tag(tag: str, function) -> Callable:
    tree = ast.parse(tag)
    body = tree.body[0].value
    if isinstance(body, ast.Call):
        name = body.func.id
        arg = body.args[0].value
        mark = getattr(pytest.mark, name).with_args(arg)
    else:
        mark = getattr(pytest.mark, tag)
    marked = mark(function)
    return cast(Callable, marked)
Then we can just register the marker as swr, and the hook applies the mark with its argument as needed. It uses ast to parse the tag and dynamically create the marker. Shown below is what mark looks like when running with @swr("123") and then with a plain @swr.
platform darwin -- Python 3.9.6, pytest-7.2.0, pluggy-1.0.0
rootdir: ***, configfile: pytest.ini
plugins: bdd-6.1.1
collecting ...
MarkDecorator(mark=Mark(name='swr', args=('123',), kwargs={}))
collected 1 item
platform darwin -- Python 3.9.6, pytest-7.2.0, pluggy-1.0.0
rootdir: ***, configfile: pytest.ini
plugins: bdd-6.1.1
collecting ...
MarkDecorator(mark=Mark(name='swr', args=(), kwargs={}))
collected 1 item
Take note of MarkDecorator in the output for each of the calls.
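Once the mark is applied this way, its argument can be read back from the test item like any other mark, for example with get_closest_marker(). A small sketch (the autouse fixture and the print are just illustrative):

import pytest


@pytest.fixture(autouse=True)
def report_swr(request):
    # request.node is the collected scenario; get_closest_marker returns
    # the nearest swr mark, or None if the scenario is not tagged with it
    mark = request.node.get_closest_marker('swr')
    if mark is not None:
        print('linked software requirement:', mark.args[0])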

Stop huge error output from testing-library

I love testing-library, have used it a lot in a React project, and I'm trying to use it in an Angular project now - but I've always struggled with the enormous error output, including the HTML text of the render. Not only is this not usually helpful (I couldn't find an element; here's the HTML where it isn't), but it gets truncated, often before the interesting line if you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.
I would say the best solution would be to use the configure method and pass a custom function for getElementError which does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
I am assuming you are running Jest with RTL in your project.
I personally wouldn't turn it off, as it's there to help us, but everyone has their own way, so if you have your reasons, fair enough.
1. If you want to disable errors for a specific test, you can mock console.error.
it('disable error example', () => {
  const errorObject = console.error; // store the state of the object
  console.error = jest.fn();         // mock the object
  // code
  // assertion (expect)
  console.error = errorObject;       // assign it back so you can use it in the next test
});
2. If you want to silence it for all tests, you could use the jest --silent CLI option. Check the docs.
The above might even disable the DOM printing that is done by RTL. I am not sure, as I haven't tried it, but if you look at the docs I linked, it says
"Prevent tests from printing messages through the console."
If the above doesn't work, you almost certainly have everything disabled except the DOM printout. In that case you might look into react-testing-library's source code and find out what is used for those print statements. Is it console.log? Is it console.warn? Once you know, just mock it out like option 1 above.
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM().
While prettyDOM() can't be disabled, you can limit the number of printed lines to 0, which just gives you the error message and three dots (...) below it.
Here is an example printout I messed around with:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is to pass in an environment variable before executing your test suite, so for example with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's feature request on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it's used elsewhere). The getElementError config option needs to be changed; its default looks like this:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
  const error = new Error(
    [message, prettyDOM(container)].filter(Boolean).join('\n\n'),
  )
  error.name = 'TestingLibraryElementError'
  return error
},
The callstack can also be removed
You can change how the message is built by setting the DOM testing library message building function with config. In my Angular project I added this to test.js:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.

Passing In Config In Gatling Tests

Noob to Gatling/Scala here.
This might be a bit of a silly question but I haven't been able to find an example of what I am trying to do.
I want to pass in things such as the baseURL, usernames, and passwords for some of my calls. These would change from env to env, so I want to be able to change the values between envs but still run the same tests in each.
I know we can feed in values, but that appears to be more for iterating over datasets and not so much for passing in config values like these.
Ideally I would like to house this information in a JSON file and not pass it in on the command line, but maybe that's not doable?
Any guidance on this would be awesome.
I have a similar setup, and you can use pure Scala here. In this scenario you can create a configuration object, for example:

object Configuration { var INPUT_PROFILE_FILE_NAME = "" }

This object can also read a file; I have the below code in the above object (it needs java.util.Properties and java.io.FileInputStream imported):

val file = getClass.getResource("data/config.properties").getFile()
val prop = new Properties()
prop.load(new FileInputStream(file))
INPUT_PROFILE_FILE_NAME = prop.getProperty("inputProfileFileName")

Now you can import this object in the Gatling simulation file:

val profileName = Configuration.INPUT_PROFILE_FILE_NAME

https://docs.scala-lang.org/tutorials/tour/singleton-objects.html

Python w/QT Creator form - Possible to grab multiple values?

I'm surprised to not find a previous question about this, but I did give an honest try before posting.
I've created a UI with Qt Creator which contains quite a few widgets of type QLineEdit, QTextEdit, and QCheckBox. I've used pyuic5 to convert it to a .py file for use in a small Python app. I've successfully got the form connected and working, but this is my first time using Python with forms.
I'm searching to see if there is a built-in function or object that would allow me to pull the objectNames and values of all widgets contained within the GUI form and store them in a dictionary of key:value pairs, because I need to send off the information for post-processing.
I guess something like this would work manually:
...
values = {}
values['checkboxName1'] = self.checkboxName1.isChecked()
values['checkboxName2'] = self.checkboxName2.isChecked()
values['checkboxName3'] = self.checkboxName3.isChecked()
values['checkboxName4'] = self.checkboxName4.isChecked()
values['lineEditName1'] = self.lineEditName1.text()
... and on and on
But is there a way to grab all the objects and loop through them, even if each different type (i.e. checkboxes, lineedits, etc) needs to be done separately?
I hope I've explained that clearly.
Thank you.
Finally got it working. I couldn't find a Python-specific example anywhere, so through trial and error this worked perfectly. I'm including the entire working code of a .py file that can generate a list of all QCheckBox objectNames on a properly referenced form.
I named my form main_form.ui from within Qt Creator, then converted it into a .py file with pyuic5:
pyuic5 main_form.ui -o main_form.py
This is the contents of a sandbox.py file:
from PyQt5 import QtCore, QtGui, QtWidgets
import sys
import main_form
# the name of my Qt Creator .ui form converted to main_form.py with pyuic5
# pyuic5 original_form_name_in_creator.ui -o main_form.py


class MainApp(QtWidgets.QMainWindow, main_form.Ui_MainWindow):
    def __init__(self):
        super(self.__class__, self).__init__()
        self.setupUi(self)
        # Push button object on main_form named btn_test
        self.btn_test.clicked.connect(self.runTest)

    def runTest(self):
        # I believe this creates a list of all QCheckBox objects on the entire UI page
        c = self.findChildren(QtWidgets.QCheckBox)
        # This is just to show how to access the objectName property as an example
        for box in c:
            print(box.objectName())


def main():
    app = QtWidgets.QApplication(sys.argv)  # A new instance of QApplication
    form = MainApp()                        # We set the form to be our MainApp (design)
    form.show()                             # Show the form
    app.exec_()                             # and execute the app


if __name__ == '__main__':  # if we're running the file directly and not importing it
    main()                  # run the main function
See QObject::findChildren()
In C++ the template argument would allow one to specify which type of widget to retrieve, e.g. to just retrieve the QLineEdit objects, but I don't know if or how that is mapped into Python.
You might need to retrieve all types and then switch handling while iterating over the resulting list.
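In PyQt5 the type filter does carry over: findChildren() accepts a widget class (or a tuple of classes), as the runTest() example above already does for QCheckBox. A sketch of how the dictionary asked about in the question could be built this way (the helper name and the widget types handled are just illustrative):

from PyQt5 import QtWidgets


def collect_form_values(window):
    """Return {objectName: value} for the common input widget types."""
    values = {}
    for box in window.findChildren(QtWidgets.QCheckBox):
        values[box.objectName()] = box.isChecked()
    for line in window.findChildren(QtWidgets.QLineEdit):
        values[line.objectName()] = line.text()
    for text in window.findChildren(QtWidgets.QTextEdit):
        values[text.objectName()] = text.toPlainText()
    return values

Calling collect_form_values(self) from inside MainApp would then give a dictionary ready to send off for post-processing.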

How do I get a pythonic list of all the pytest tests in a folder?

Something like --collect-only, but from Python and not the command line, that returns a list of paths. I tried to see how pytest does it and I can't seem to find it.
Thanks!
All collected tests are stored in the items attribute of the session object.
You can access the session object via:
a session-level fixture
pytest hooks, for example pytest_collection_modifyitems or pytest_runtestloop
Example:
import pytest


@pytest.fixture(scope='session', autouse=True)
def get_all_tests(request):
    items = request.session.items
    all_tests_names = [item.name for item in items]
    all_tests_locations = [item.location for item in items]
    # location is a tuple of (file_path, line_number, Classname.methodname)
If you want more info about the session or item objects, you can of course read the docs or source code, but I prefer to use pdb.set_trace() to dig into the object.
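If the goal is just a plain Python list of test paths without running anything, another option (a sketch, assuming a reasonably recent pytest; the tests/ folder is a placeholder) is to drive collection programmatically with pytest.main() and a tiny in-process plugin that records the collected items:

import pytest


class _Collector:
    def __init__(self):
        self.nodeids = []

    def pytest_collection_modifyitems(self, items):
        # called after collection finishes; items are the collected tests
        self.nodeids = [item.nodeid for item in items]


def list_tests(folder):
    collector = _Collector()
    # --collect-only stops before running tests; -q keeps the output quiet
    pytest.main(['--collect-only', '-q', folder], plugins=[collector])
    return collector.nodeids


print(list_tests('tests/'))

Each node ID looks like tests/test_example.py::test_something; item.location[0] gives just the file path if that is all you need.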