Karate-Gatling: Not able to run scenarios based on tags - Scala

I am trying to run a performance test on the scenario tagged @perf in the feature file below:
@tag1 @tag2 @tag3
Background:
user login
@tag4 @perf
Scenario: scenario1
@tag4
Scenario: scenario2
Below is my .scala file setup:
class PerfTest extends Simulation {

  val protocol = karateProtocol()

  val getTags = scenario("Name goes here").exec(karateFeature("classpath:filepath"))

  setUp(
    getTags.inject(atOnceUsers(1)).protocols(protocol)
  )
}
I have tried passing the tags from the command line, and also passing the tag as an argument to karateFeature in the Scala setup.
Terminal command:
mvn clean test-compile gatling:test "-Dkarate.env={env}" "-Dkarate.options= --tags @perf"
.scala update: I have also tried passing the tag as an argument in the Karate execute.
val getTags = scenario("Name goes here").exec(karateFeature("classpath:filepath", "@perf"))
Both scenarios are executed with either approach. Any pointers on how I can force only the scenario tagged @perf to run?

I wanted to share my finding here. I realized it works fine when I pass the tag info in the .scala file.
My scenario with the @perf tag was a combination of a GET and a POST call, as I needed data from the GET call to pass into the POST call. That is why I was seeing both calls when running the performance test.
I did not find any reference in the Karate-Gatling documentation to passing tags in the terminal command, so I am assuming that is not a supported use case.
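For reference, a minimal sketch of the working setup (the feature path, scenario name and val name are placeholders):
class PerfTest extends Simulation {

  val protocol = karateProtocol()

  // the second argument to karateFeature filters scenarios by tag
  val perfOnly = scenario("perf scenarios only")
    .exec(karateFeature("classpath:filepath", "@perf"))

  setUp(perfOnly.inject(atOnceUsers(1)).protocols(protocol))
}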


SCP command not working in karate project - it throws command error: cannot run program scp.exe: CreateProcess error=2 [duplicate]

I'm trying to execute a bash script using Karate. I'm able to execute the script from karate-config.js and also from a .feature file, and I'm able to pass arguments to the script.
The problem is that if the script fails (exits with something other than 0), the test execution continues and finishes as successful.
I found out that when the script echoes something, I can access it as the result of the script, so I could possibly echo the exit value and assert on it (in some re-usable feature), but this seems like a workaround rather than a clean solution. Is there a clean way of accessing the exit code without echoing it? Am I missing something?
script.sh
#!/bin/bash
# possible solution
# echo 3
exit 3
karate-config.js
var result = karate.exec('script.sh arg1')
feature file
* def result = karate.exec('script.sh arg1')
Great timing. We very recently did some work for CLI testing which I am sure you can use effectively. Here is a thread on Twitter: https://twitter.com/maxandersen/status/1276431309276151814
And we have just released version 0.9.6.RC4, which has a new karate.fork() option that returns an instance of Command on which you can call exitCode.
Here's an example:
* def proc = karate.fork('script.sh arg1')
* proc.waitSync()
* match proc.exitCode == 0
You can get more ideas here: https://github.com/intuit/karate/issues/1191#issuecomment-650087023
Note that the argument to karate.fork() can take multiple forms, and if you are using karate.exec() (which blocks until the process completes) the same arguments work:
string - full command line as seen above
string array - e.g. ['script.sh', 'arg1']
json, where the keys can be:
line - string (OR)
args - string array
env - optional environment properties (as JSON)
redirectErrorStream - boolean, true by default which means Sys.err appears in Sys.out
workingDir - working directory
useShell - default false, auto-prepend cmd /c or sh -c depending on OS
And since karate.fork() is async, you need to call waitSync() if needed as in the example above.
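For example, the JSON form could look like this (a sketch; the script name, env value and working directory are illustrative):
* def proc = karate.fork({ args: ['script.sh', 'arg1'], env: { MY_VAR: 'value' }, workingDir: 'target' })
* proc.waitSync()
* match proc.exitCode == 0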
Do provide feedback and we can tweak further if needed.
EDIT: here's a very advanced example that shows how to listen to the process output / log, collect the log, and conditionally exit: fork-listener.feature
Another answer which can be a useful reference: Conditional match based on OS
And here's how to use cURL for advanced HTTP tests: https://stackoverflow.com/a/73230200/143475
In case you need to do a lot of local file manipulation, you can use the karate.toJavaFile() utility to convert a relative path or a "prefixed" path to an absolute path.
* def file = karate.toJavaFile('classpath:some/file.txt')
* def path = file.getPath()

Unhelpful output from pytest

TLDR: How can I get better output from pytest?
I'm using Django with regular python3 unittests.
I've just switched to pytest-django for running tests.
pytest throws an error for almost all my tests (149 in total). Pages and pages of this error:
self = <RegexURLResolver 'project.urls' (None:None) ^/>

    @property
    def reverse_dict(self):
        language_code = get_language()
        if language_code not in self._reverse_dict:
            self._populate()
>       return self._reverse_dict[language_code]
E       KeyError: 'en-us'
Which wasn't the actual problem; it led me down the wrong path.
I had a syntax error in one of my views.py files.
./manage.py test resulted in:
snip
  File "/home/roland/project/views.py", line 20
    code = zip(list1, list2])
SyntaxError: invalid syntax
Notice the stray ] at the end, which was the problem.
So: How can I get more useful output on problems when using pytest?
Btw: after finding this and scrolling back through the pytest output, there was mention of the syntax error. It was just buried in the output.
You can use the --maxfail=1 option so that pytest stops immediately on the first failure.
Also, make sure your pytest.ini is set up properly so that pytest knows it should be using pytest-django:
[pytest]
DJANGO_SETTINGS_MODULE = myapp.settings
For my workflow, I usually do the following:
run pytest --maxfail=1 myfile.py &> pytest-output.txt
tail, grep, or search the text file for errors
fix and iterate
There are a lot of other configuration options that will help you to get more meaningful output from pytest.
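For instance (a sketch; which options help depends on your suite), a shorter traceback style and a summary of failures can make the real error easier to spot:
[pytest]
DJANGO_SETTINGS_MODULE = myapp.settings
addopts = -ra --tb=short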

pytest implementing a logfile per test method

I would like to create a separate log file for each test method, and I would like to do this in the conftest.py file and pass the logfile instance to the test method. This way, whenever I log something in a test method, it would go to a separate log file and be very easy to analyse.
I tried the following.
Inside the conftest.py file I added this:
import os
import pkg_resources
import logger  # custom helper that creates the logfile and returns a python logging object

logs_dir = pkg_resources.resource_filename("test_results", "logs")

def pytest_runtest_setup(item):
    test_method_name = item.name
    # strip the .py extension from the module name
    testpath = os.path.splitext(item.parent.name)[0]
    path = '%s/%s' % (logs_dir, testpath)
    if not os.path.exists(path):
        os.makedirs(path)
    log = logger.make_logger(test_method_name, path)
The problem here is that pytest_runtest_setup does not have the ability to return anything to the test method. At least, I am not aware of it.
So I thought of creating a fixture method inside the conftest.py file with scope="function" and calling this fixture from the test methods. But the fixture method does not know about the pytest Item object. In the case of the pytest_runtest_setup method, it receives the item parameter, and using that we are able to find out the test method name and test method path.
Please help!
I found this solution by researching further upon webh's answer. I tried to use pytest-logger, but its file structure is very rigid and it was not really useful for me. I found this code working without any plugin. It is based on set_log_path, which is an experimental feature.
Tested with pytest 6.1.1 and Python 3.8.4.
# conftest.py
# Required modules
import pytest
from pathlib import Path

# Configure logging
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    config = item.config
    logging_plugin = config.pluginmanager.get_plugin("logging-plugin")
    filename = Path('pytest-logs', item._request.node.name + ".log")
    logging_plugin.set_log_path(str(filename))
    yield
Notice that the use of Path can be substituted by os.path.join. Moreover, different tests can be set up in different folders, and a historical record of all runs can be kept by using a timestamp in the filename. One could use the following filename, for example:
# conftest.py
# Required modules
import datetime
import pytest
from pathlib import Path

# Configure logging
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    ...
    filename = Path(
        'pytest-logs',
        item._request.node.name,
        f"{datetime.datetime.now().strftime('%Y%m%dT%H%M%S')}.log"
    )
    ...
Additionally, if one would like to modify the log format, one can change it in the pytest configuration file as described in the documentation.
# pytest.ini
[pytest]
log_file_level = INFO
log_file_format = %(name)s [%(levelname)s]: %(message)s
My first stackoverflow answer!
I found the answer I was looking for.
I was able to achieve it using a function-scoped fixture like this:
@pytest.fixture(scope="function")
def log(request):
    # strip the .py extension from the module name
    test_path = os.path.splitext(request.node.parent.name)[0]
    test_name = request.node.name
    node_id = request.node.nodeid
    log_file_path = '%s/%s' % (logs_dir, test_path)
    if not os.path.exists(log_file_path):
        os.makedirs(log_file_path)
    logger_obj = logger.make_logger(test_name, log_file_path, node_id)
    yield logger_obj
    # teardown: close and detach handlers so the logfile is released
    for handler in list(logger_obj.handlers):
        handler.close()
        logger_obj.removeHandler(handler)
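A test method then simply requests the fixture by name (a sketch; the test name is illustrative):
def test_something(log):
    log.info("this line ends up in the per-test logfile")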
In newer pytest versions this can be achieved with set_log_path:
@pytest.fixture(autouse=True)
def manage_logs(request):
    """Set log file name same as test name"""
    request.config.pluginmanager.get_plugin("logging-plugin")\
        .set_log_path(os.path.join('log', request.node.name + '.log'))
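Because the fixture is marked autouse=True, every test in scope automatically gets its own file under the log directory without having to request the fixture explicitly.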

Failing maven-build when Gatling-test has too high fail-percentage

I am trying to run a simple performance test through Gatling. I use Maven to run the process. In order to easily pick up when changes in the code break my Gatling tests, I want the Maven build to fail. I have made sure to add <failOnError>true</failOnError> to my pom.xml file.
My current script is something like this:
class MySim extends Simulation {

  val httpProtocol = http.baseURL("http://localhost:8080")

  val scn = scenario("Test")
    .exec(http("request_1")
      .get("/helloworld")
      .check(status.is(200)))

  setUp(scn.inject(ramp(1 users) over (1 seconds))).protocols(httpProtocol)
}
I run the build using Maven (with the gatling-maven-plugin) using mvn clean gatling:execute, which will always succeed (even when the server isn't running). I am looking for a way to make the Maven build fail when the Gatling test fails (or has too high a fail percentage).
So I figured out a solution: all I had to do was add assertions to the setUp, with the criteria I wanted. So the following code will fail the Maven build if the success rate is lower than 90%.
setUp(scn.inject( ... ))
  .protocols(httpProtocol)
  .assertions(
    global.successfulRequests.percent.greaterThan(90)
  )
You have to use the Assertions API.
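Other criteria can be combined in the same call; for example (a sketch using the same Assertions API, with an illustrative response-time threshold):
setUp(scn.inject( ... ))
  .protocols(httpProtocol)
  .assertions(
    global.successfulRequests.percent.greaterThan(90),
    global.responseTime.max.lessThan(800)
  )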

How to get test name and test result during run time in pytest

I want to get the test name and test result at runtime.
I have setUp and tearDown methods in my script. In setUp I need to get the test name, and in tearDown I need to get the test result and the test execution time.
Is there a way I can do this?
You can, using a hook.
I have these files in my test directory:
./rest/
├── conftest.py
├── __init__.py
└── test_rest_author.py
In test_rest_author.py I have three functions, startup, teardown and test_tc15, but I only want to show the result and name for test_tc15.
Create a conftest.py file if you don't have one yet and add this:
import pytest
from _pytest.runner import runtestprotocol

def pytest_runtest_protocol(item, nextitem):
    reports = runtestprotocol(item, nextitem=nextitem)
    for report in reports:
        if report.when == 'call':
            # print the test name and outcome once the test body has run
            print('\n%s --- %s' % (item.name, report.outcome))
    return True
The hook pytest_runtest_protocol implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks. It is called whenever any test finishes (whether setup, teardown or the test itself).
If you run your script you can see the result and name of the test:
$ py.test ./rest/test_rest_author.py
====== test session starts ======
/test_rest_author.py::TestREST::test_tc15 PASSED
test_tc15 --- passed
======== 1 passed in 1.47 seconds =======
See also the docs on pytest hooks and conftest.py.
unittest.TestCase.id() returns the complete details, including the class name and method name; from this we can extract the test method name.
Getting the result during execution can be achieved by checking whether an exception was raised while running the test: if the test fails there will be an exception. If sys.exc_info() returns (None, None, None), the test passed; otherwise it failed.
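A sketch of that idea in a plain unittest.TestCase (illustrative only; whether sys.exc_info() still sees the failure inside tearDown depends on your Python/unittest version, so verify on your setup):
import sys
import time
import unittest

class MyTest(unittest.TestCase):
    def setUp(self):
        # id() returns e.g. 'module.Class.test_method'; keep the last part
        self.test_name = self.id().split('.')[-1]
        self.start_time = time.time()

    def tearDown(self):
        elapsed = time.time() - self.start_time
        # (None, None, None) means no exception is currently being handled
        result = 'pass' if sys.exc_info() == (None, None, None) else 'fail'
        print('%s --- %s (%.2fs)' % (self.test_name, result, elapsed))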
Using pytest_runtest_protocol as suggested, together with a fixture marker, solved my problem. In my case it was enough just to use reports = runtestprotocol(item, nextitem=nextitem) within my pytest-html fixture. To summarize, the item element contains the information you need.
Many thanks.