I need to add multiple Contains assertions to my REST API test steps using a Groovy script

I have to check every parameter of my response using a Contains assertion, and there are approximately 20 parameters. I added a few using setToken(), but it is tedious to write this 20 times. Is there another method that lets me add all the parameters in a few lines of code?
I have written the code below to add a few assertions:
def addassert = testSteps.addAssertion("Contains")
addassert.setToken("totalCases")
addassert.setName("Total Cases")

addassert = testSteps.addAssertion("Contains")
addassert.setToken("numberofChains")
addassert.setName("No Of Chains")

addassert = testSteps.addAssertion("Contains")
addassert.setToken("numberofOutlets")
addassert.setName("No Of Outlets")

addassert = testSteps.addAssertion("Contains")
addassert.setToken("expiringToday")
addassert.setName("Exp Today")

Related

How to repeat each test with a delay if a particular Exception happens (pytest)

I have a load of tests which I want to rerun if a particular exception occurs. The reason for this is that I am running real API calls against a server and sometimes I hit its rate limit, in which case I want to wait and try again.
However, I am also using a pytest fixture to make each test run several times, because I am sending requests to different servers (the actual use case is different cryptocurrency exchanges).
Using pytest-rerunfailures comes very close to what I need, apart from the fact that I can't see how to inspect the exception from the last test run in the rerun condition.
Below is some code that shows what I am trying to achieve, but obviously I don't want to write code like this for every test:
import time

import pytest_asyncio

@pytest_asyncio.fixture(
    params=EXCHANGE_NAMES,
)
async def client(request):
    exchange_name = request.param
    exchange_client = get_exchange_client(exchange_name)
    return exchange_client

def test_something(client):
    test_something.count += 1
    ### This is the block of code I want to avoid writing in every test
    try:
        result = client.do_something()
    except RateLimitException:
        if test_something.count <= 3:
            sleep_duration = get_sleep_duration(client)
            time.sleep(sleep_duration)
            # run the same test again
            test_something()
        else:
            raise
    expected = [1, 2, 3]
    assert result == expected
You can use the retry library to wrap your actual code in:
import pytest_asyncio
from retry import retry  # pip install retry

@pytest_asyncio.fixture(
    params=EXCHANGE_NAMES,
    autouse=True,
)
async def client(request):
    exchange_name = request.param
    exchange_client = get_exchange_client(exchange_name)
    return exchange_client

def test_something(client):
    actual_test_something(client)

@retry(RateLimitException, tries=3, delay=2)
def actual_test_something(client):
    '''Retry on RateLimitException, raise after 3 attempts, sleep 2 seconds between attempts.'''
    result = client.do_something()
    expected = [1, 2, 3]
    assert result == expected
The code looks much cleaner this way.

How to add a random integer for each case in Schemathesis?

I'm trying to test an API endpoint with random input for mid and cids (code below). However, whenever I run the test it says missing required positional arguments. Can anyone please help?
from hypothesis import settings
from hypothesis import strategies as st

# `schema` (the Schemathesis schema object) and `base64_composer` are defined elsewhere
@schema.parametrize()
@schema.given(mid=st.integers(), cids=st.lists(st.integers()))
@settings(max_examples=1)
def test_api_customised(mid, cids, case):
    case.headers = case.headers or {}
    case.headers['Authorization'] = "apiKey " + str(base64_composer)
    case.headers['Content-Type'] = "application/json"
    # CREATE JOB
    if case.method == "POST":
        if isinstance(case.body, dict):
            case.body['moduleId'] = mid
            case.body['clientIds'] = cids
    print(case.body)
    response = case.call()
    case.validate_response(response)
And I got this error:
TypeError: test_api_customised() missing 2 required positional arguments: 'mid' and 'cids'
It is likely caused by the presence of explicit examples in the API schema. See this issue.
A temporary solution would be to exclude the explicit phase:
... # Skipped for brevity
from hypothesis import settings, Phase

# Used in `@settings` below
PHASES = set(Phase) - {Phase.explicit}

@schema.parametrize()
@schema.given(mid=st.integers(), cids=st.lists(st.integers()))
@settings(max_examples=1, phases=PHASES)
def test_api_customised(mid, cids, case):
    ... # the rest of the test
A more comprehensive solution requires changes in Schemathesis (see this issue).
You can check whether this is the cause in your case by removing the examples / example / x-examples / x-example keywords (depending on your API spec version) from the API schema. If that is not the cause, I encourage you to report the issue with more details (preferably including your API schema).
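If editing the schema file by hand is not convenient, one way to test this hypothesis (not part of the original answer) is to strip those keywords from the schema before loading it. The file name and the schemathesis.from_dict loader below are assumptions about your setup:

import json

import schemathesis

def strip_example_keywords(node):
    # Recursively drop example-related keywords from the schema definition
    if isinstance(node, dict):
        for key in ("examples", "example", "x-examples", "x-example"):
            node.pop(key, None)
        for value in node.values():
            strip_example_keywords(value)
    elif isinstance(node, list):
        for value in node:
            strip_example_keywords(value)

with open("openapi.json") as f:  # illustrative schema file name
    raw_schema = json.load(f)

strip_example_keywords(raw_schema)
schema = schemathesis.from_dict(raw_schema)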

Count redirects in JMeter

At the moment I'm using an HTTP Request sampler with 'Follow Redirects' enabled and I want to keep it that way. As a secondary check besides the assertion, I want to count the number of redirects as well, but I don't want to implement this solution.
Is there a way I can use only one HTTP sampler plus a post-processor (BeanShell for now) and fetch this information? I'm checking the SamplerResult documentation, but I can't find any method that would give me this information.
I heard Groovy is the new black: since JMeter 3.1, users are encouraged to use JSR223 Test Elements and the __groovy() function instead of Beanshell, because Beanshell does not perform as well. So you can count the redirects as follows:
Add a JSR223 PostProcessor as a child of your HTTP Request sampler.
Put the following code into the "Script" area:
int redirects = 0;
def range = new IntRange(false, 299, 400)
// prev is the parent SampleResult; with 'Follow Redirects' enabled each redirect shows up as a sub-result
prev.getSubResults().each {
    if (range.contains(it.getResponseCode() as int)) {
        redirects++;
    }
}
log.info('Redirects: ' + redirects)
Once you run your test you will be able to see the number of redirects that occurred in the jmeter.log file.
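If you also need the count in a later assertion or sampler rather than only in the log, the same JSR223 PostProcessor could additionally store it as a JMeter variable; the variable name below is illustrative:

// Expose the count to later test elements (variable name is illustrative)
vars.put('redirectCount', redirects as String)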
Add the following Regular Expression Extractor as a child of your sampler:
Apply to: Main sample and sub-samples
Field to check: Response code
Regular Expression: (\d+)
Template: $1$
Match No.: -1
Then add a BeanShell Post Processor also as a child of the sampler and add the following to the script area:
// MyVar is the reference name used in the Regular Expression Extractor above
int matchNr = Integer.parseInt(vars.get("MyVar_matchNr"));
int counter = 0;
for (int i = 1; i <= matchNr; i++) {
    String x = vars.get("MyVar_" + i);
    if (x.equals("302")) { // count only 302 redirect responses
        counter = counter + 1;
    }
}
// The output is printed in the log like "BeanShell PostProcessor: Number of redirects = 3",
// so you may want to rename this PostProcessor to match your sampler's name.
log.info(Label + ": Number of redirects = " + String.valueOf(counter));
Then you can see the number of redirects for the sampler in the log.

Pytest - skip (xfail) mixed with parametrize

Is there a way to use the incremental marker described in "Pytest: how to skip the rest of tests in the class if one has failed?" mixed with @pytest.mark.parametrize, like below:
@pytest.mark.incremental
class TestClass:
    @pytest.mark.parametrize("input", data)
    def test_preprocess_check(self, input):
        # prerequisite for the test
        ...

    @pytest.mark.parametrize("input", data)
    def test_process_check(self, input):
        # test only if test_preprocess_check succeeded
        ...
The problem I encountered is that, at the first failure of test_preprocess_check for a given input of my data set, all the following test_preprocess_check and test_process_check cases are labeled "xfail".
The behaviour I expect is that for each new "input" of my parametrized data set, the tests act in an incremental fashion.
For example, with data = [0, 1, 2], if only test_preprocess_check(0) fails, I get the following report:
1 failed, 5 xfailed
but I expect this report:
1 failed, 1 xfailed, 4 passed
Thanks
After some experiments I found a way to generalize the incremental approach to work with the parametrize annotation: simply rewrite the _previousfailed attribute so that it is unique for each input. The item's _genid attribute was exactly what was needed.
I added a @pytest.mark.incrementalparam marker to achieve this.
The code becomes:
# conftest.py
import pytest

def pytest_runtest_setup(item):
    previousfailed_attr = getattr(item, "_genid", None)
    if previousfailed_attr is not None:
        previousfailed = getattr(item.parent, previousfailed_attr, None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
    previousfailed = getattr(item.parent, "_previousfailed", None)
    if previousfailed is not None:
        pytest.xfail("previous test failed (%s)" % previousfailed.name)

def pytest_runtest_makereport(item, call):
    if "incrementalparam" in item.keywords:
        if call.excinfo is not None:
            previousfailed_attr = item._genid
            setattr(item.parent, previousfailed_attr, item)
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            parent = item.parent
            parent._previousfailed = item
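For completeness, here is a minimal usage sketch, assuming the two hooks above live in conftest.py; the marker name matches the hooks, but the test bodies and data are purely illustrative:

import pytest

data = [0, 1, 2]

@pytest.mark.incrementalparam
class TestClass:
    @pytest.mark.parametrize("input", data)
    def test_preprocess_check(self, input):
        assert input != 0  # illustrative failure for input 0 only

    @pytest.mark.parametrize("input", data)
    def test_process_check(self, input):
        # expected to xfail only for input 0 and pass for the others
        assert True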
It's worth mentioning that this can't be used without parametrize, because it is the parametrize annotation that automatically creates the _genid attribute.
Hope this helps others too.

Showing test count in buildbot

I am not particularly happy about the stats that Buildbot provides. I understand that it is for building and not testing - that's why it has a concept of Steps, but no concept of a Test. Still, there are many cases where you need test statistics from build results, for example when comparing skipped and failed tests on different platforms to estimate the impact of a change.
So, what is needed to make Buildbot display a test count in the results?
What is the simplest way, so that a person who doesn't know anything about Buildbot can do it in 15 minutes?
Depending on how you want to process the test results and how they are presented, Buildbot does provide a Test step, buildbot.steps.shell.Test.
Here is an example of how I use it in my build environment:
import os
import xml.etree.ElementTree as etree

from buildbot.steps import shell
from buildbot.status import results  # SUCCESS/WARNINGS/FAILURE constants; module path may differ across Buildbot versions

class CustomStepResult(shell.Test):
    description = 'Analyzing results'
    descriptionDone = 'Results analyzed'

    def __init__(self, log_file=None, *args, **kwargs):
        self._log_file = log_file
        shell.Test.__init__(self, *args, **kwargs)
        self.addFactoryArguments(log_file=log_file)

    def start(self):
        if not os.path.exists(self._log_file):
            self.finished(results.FAILURE)
            self.step_status.setText('TestResult XML file not found!')
        else:
            tree = etree.parse(self._log_file)
            root = tree.getroot()
            passing = len(root.findall('./testsuite/testcase/success'))
            skipped = len(root.findall('./testsuite/testcase/skip'))
            fails = (len(root.findall('./testsuite/error')) +
                     len(root.findall('./testsuite/testcase/error')) +
                     len(root.findall('./testsuite/testcase/failure')))
            self.setTestResults(total=fails + passing + skipped, failed=fails, passed=passing)
            # the final status for WARNINGS is green but the step itself will be orange
            self.finished(results.SUCCESS if fails == 0 else results.WARNINGS)
            self.step_status.setText(self.describe(True))
And in the configuration factory I create a step as below:
factory.addStep(CustomStepResult(log_file = log_file))
Basically I override the default Test shell step and pass a custom XML file which contains my test results. I then look for the pass/fail/skip result nodes and accordingly display the results in the waterfall.
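For reference, the findall() paths above imply a results file shaped roughly like the following; only the testsuite/testcase/success, skip, error and failure nesting comes from the code, while the root tag, names and attributes are assumptions:

<results>
  <testsuite name="unit">
    <testcase name="test_ok"><success/></testcase>
    <testcase name="test_skipped"><skip/></testcase>
    <testcase name="test_broken"><error message="unexpected exception"/></testcase>
    <testcase name="test_failed"><failure message="assertion failed"/></testcase>
    <error message="suite-level error, also counted as a failure"/>
  </testsuite>
</results>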