The test below checks the performance of a GraphQL endpoint. A CSV file of ids is fed into the test. When run, about 1% of the requests fail because the endpoint returns an error for some of the ids. But the message returned from GraphQL is not very descriptive, so I have no idea which ids actually failed. I'd like to add a step to the test that logs the request body and response for all failed requests.
I could enable the debug log, but that would log everything, and I'm only interested in the requests that fail. Is it possible to add something like an "on failure" step that would let me log the request body and response so that I know which ids failed?
class Test extends CommonSimulation {

  val graphqlQuery: String =
    """
      |{"query":"{person(personId:\"${id}\")}"}
      |""".stripMargin

  val gqsPerson: ScenarioBuilder = scenario("Service Test")
    .feed(csv(Data.getPath + "id.csv").random)
    .exec(http("My test")
      .post("https://localhost:4000/graphql")
      .body(StringBody(graphqlQuery)).asJson
      .check(jsonPath("$.errors").notExists)
      .headers(headers)
    )

  setUp(
    authToken.inject(atOnceUsers(1))
      .andThen(
        gqsPerson.inject(constantConcurrentUsers(1) during 1)
      ))
}
Please have a look at the documentation: https://gatling.io/docs/gatling/guides/debugging/#logback — the Logback configuration described there can be set to log only the failing HTTP requests rather than everything.
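If you would rather keep the logging inside the simulation itself instead of Logback, one option is to save the response body with a check and log it from a follow-up exec block when the session is failed. A minimal sketch, assuming Gatling's Scala DSL and that the feeder column is `id` (the request body is just graphqlQuery with ${id} resolved, so logging the id identifies it):

val gqsPerson: ScenarioBuilder = scenario("Service Test")
  .feed(csv(Data.getPath + "id.csv").random)
  .exec(http("My test")
    .post("https://localhost:4000/graphql")
    .body(StringBody(graphqlQuery)).asJson
    .check(bodyString.saveAs("responseBody")) // saved before the failing check so it is available either way
    .check(jsonPath("$.errors").notExists)
    .headers(headers)
  )
  .exec { session =>
    // Log only the failed requests, with the id that produced them
    if (session.isFailed) {
      println(s"Failed id=${session("id").as[String]} response=${session("responseBody").as[String]}")
    }
    session
  }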
All the other questions seem to address getting the Spark applicationId. I want to cancel a Spark job programmatically, which requires the jobId:
spark.sparkContext.cancelJob(jobId)
That is, something similar to the way the application id is fetched:
sc.applicationId
You can use the following logic for this use case.
Step 1: get the job details.
import requests
import json

class BearerAuth(requests.auth.AuthBase):
    def __init__(self, token):
        self.token = token

    def __call__(self, r):
        r.headers["authorization"] = "Bearer " + self.token
        return r

response = requests.get('https://databricksinstance/api/2.0/jobs/list',
                        auth=BearerAuth('token')).json()
print(response)
Step 2: cancel the run via the REST API.
Use the same code, but issue a POST against the URL below, passing the run_id of the run to cancel in the request body:
https://<databricks-instance>/api/2.1/jobs/runs/cancel
ref: link
The Spark status tracker is meant for monitoring job and stage progress.
In your case, you can fetch all of the active job ids:
sc.statusTracker.getActiveJobIds
See the official Scala doc.
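A minimal sketch combining the two, assuming you want to cancel every job that is currently active:

// Sketch: cancel all currently active jobs via the status tracker
val sc = spark.sparkContext
sc.statusTracker.getActiveJobIds().foreach { jobId =>
  sc.cancelJob(jobId) // cancelJob takes the Int job id
}

If you need to target specific jobs rather than all of them, tagging the work with sc.setJobGroup and cancelling via sc.cancelJobGroup is another option.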
I am new to Scala and Gatling.
When a Gatling check for status 200 fails, I want to include the variable onlineID in the logs so that I know which user had an issue.
object MyRequests {
  val getAddressForOnlineId = feed(Configuration.csvFeeder)
    .exec(
      http("Abfrage von Adressdaten")
        .get(Configuration.baseUrl + "/myrequest/${myonlineID}")
        .headers(Configuration.globalHeaders)
        .check(status.is(200))
    )
}
How can I do this?
Save the status in your check with saveAs, then, in an exec(function) block, extract the status and myonlineID values from the Session and print them to your own file or an slf4j logger when the status is not 200.
I recommend you have a look at the official documentation and Gatling Academy.
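A minimal sketch of that approach, assuming the myonlineID column comes from your csvFeeder:

val getAddressForOnlineId = feed(Configuration.csvFeeder)
  .exec(
    http("Abfrage von Adressdaten")
      .get(Configuration.baseUrl + "/myrequest/${myonlineID}")
      .headers(Configuration.globalHeaders)
      .check(status.saveAs("httpStatus")) // saved before the pass/fail check so it survives a failure
      .check(status.is(200))
  )
  .exec { session =>
    // -1 if the request never completed and no status was saved
    val status = session("httpStatus").asOption[Int].getOrElse(-1)
    if (status != 200) {
      println(s"onlineID ${session("myonlineID").as[String]} got status $status")
    }
    session
  }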
I'm load testing a local API that redirects a user based on a few conditions. Locust is not following redirects for the simulated users hitting the endpoints; I know this because the app logs all redirects. If I manually hit the endpoints using curl, I can see the status is 302 and the Location header is set.
According to the embedded clients.HttpSession.request object, the allow_redirects option is set to True by default.
Any ideas?
We use redirection in our Locust tests, especially during the login phase, and the redirects are handled for us without a hitch. Print the status_code of the response that you get back. Is it 200, 3xx, or something worse?
Another suggestion: don't throw your entire testing workflow into the locustfile; that makes it too difficult to debug problems. Instead, create a standalone Python script that uses the requests library directly to simulate your workflow. Iron out any kinks, like redirection problems, in that simple, non-Locust test script. Once you have that working, extract what you did into a file or class and have the Locust task use the class.
Here is an example of what I mean. FooApplication does the real work; it is consumed by both the locustfile and a simple test script.
foo_app.py
class FooApplication():
    def __init__(self, client):
        self.client = client
        self.is_logged_in = False

    def login(self):
        self.client.cookies.clear()
        self.is_logged_in = False
        name = '/login'
        response = self.client.post('/login', {
            'user': 'testuser',
            'password': '12345'
        }, allow_redirects=True, name=name)
        if not response.ok:
            self.log_failure('Login failed', name, response)
        else:
            self.is_logged_in = True  # mark success so the tasks can branch on it

    def load_foo(self):
        name = '/foo'
        response = self.client.get('/foo', name=name)
        if not response.ok:
            self.log_failure('Foo request failed', name, response)

    def log_failure(self, message, name, response):
        pass  # add some logging
foo_test_client.py
# Use this test file to iron out kinks in your request workflow
from locust.clients import HttpSession
from foo_app import FooApplication

client = HttpSession('http://dev.foo.com')
app = FooApplication(client)

app.login()
app.load_foo()
locustfile.py
from locust import TaskSet, task

from foo_app import FooApplication

class FooTaskSet(TaskSet):
    def on_start(self):
        self.foo = FooApplication(self.client)

    @task(1)
    def login(self):
        if not self.foo.is_logged_in:
            self.foo.login()

    @task(5)  # 5x more likely to load a foo vs logging in again
    def load_foo(self):
        if self.foo.is_logged_in:
            self.foo.load_foo()
        else:
            self.foo.login()
Since Locust uses the Requests HTTP library for Python, you might find your answer there.
The Response object can be used to evaluate whether a redirect happened and what the history of redirects contains. From the documentation of is_redirect:
True if this Response is a well-formed HTTP redirect that could have been processed automatically (by Session.resolve_redirects).
If that is False for your responses, it may be an indication that the redirect is not well-formed.
I am very new to SoapUI and need some help.
I want to test multiple SOAP requests as a kind of smoke test, and save the result in a text file along with the error message of each failed response.
I have created a test case with all the SOAP requests and used an Invalid HTTP Status Codes assertion to validate pass/fail (as mentioned, just a smoke test), and I am writing the result into a txt file using a TearDown script.
I have created custom properties for each SOAP request and, using a Property Transfer step, fetched the error message and reporting entity.
Now my concern is how to write those property values into the txt file along with the result.
I am using the TearDown script below to store the result.
import org.codehaus.groovy.scriptom.*
import org.codehaus.groovy.scriptom.tlb.office.excel.*

def testsuitename = testRunner.testCase.testSuite.name
def testcasename = testRunner.testCase.name
groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def results = testRunner.results

f = new File("C:\\Users\\%user%\\Documents\\Downloads\\Smoke Test Result\\result.txt")
for (r in results) {
    f.append(r.testStep.name + "," + r.status + "\r\n")
}
Output is something like this:
step-1-name,FAILED
step-2-name,OK
step-3-name,OK
I am looking for output with error message
step-1-name,FAILED,Error message,Reporting Entity
step-2-name,OK,null,null
step-3-name,OK,null,null
I have already fetched the error message and reporting entity into the properties using XPath.
A Gatling scenario with an exec chain: after a request, the returned data is saved; later it is processed, and depending on the processing result it should either fail or pass the test.
This seems like the simplest possible scenario, yet I can't find any reliable information on how to fail a test from within an exec block. assert breaks the scenario and seemingly Gatling itself (i.e. the thrown exception doesn't just fail the test).
Example:
// The scenario consists of a single test with two execs forming the exec chain
val scn = scenario("MyAwesomeScenario").exec(reportableTest(
  // Send the request
  exec(http("127.0.0.1/Request")
    .get(requestUrl)
    .check(status.is(200))
    .check(bodyString.saveAs("MyData")))
  // Process the data; this assert is what blows up
  .exec { session =>
    assert(processData(session("MyData").as[String]) == true, "Invalid data")
    session
  }
))
Running the above, the scenario fails somewhere along the line with "guardian failed, shutting down system".
Now this seems like a useful, often-needed thing to do, so I'm possibly missing something simple. How should it be done?
You have to abide by Gatling APIs.
With checks, you don't "fail" the test, but the request. If you're looking to fail the whole test, you should have a look at the Assertions API and the Jenkins plugin.
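For instance, a minimal sketch of the Assertions API, which fails the whole run (and hence the Jenkins build) when any request failed a check:

setUp(
  scn.inject(atOnceUsers(10))
).assertions(
  global.failedRequests.count.is(0) // fail the run if any request failed
)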
You can only perform a Check at the request site, not later. One very good reason is that if you store the bodyString in the Session like you're doing, you'll end up using a lot of memory and possibly crash (the body stays referenced, so it can't be garbage collected). You have to perform your processData in the check itself, typically in the optional transform step.
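A minimal sketch of that approach, assuming processData takes the response body as a String and returns a Boolean, as in the question:

exec(
  http("127.0.0.1/Request")
    .get(requestUrl)
    .check(status.is(200))
    .check(
      // Run the processing at check time; nothing is stored in the Session
      bodyString.transform(body => processData(body)).is(true)
    )
)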
Were you looking for something like this?
.exec(http("getRequest")
.get("/request/123")
.headers(headers)
.check(status.is(200))
.check(jsonPath("$.request_id").is("123")))
Since the edit queue is already full: this is already resolved in newer versions of Gatling, as of release 3.4.0.
They added exitHereIf:
exitHereIf("${myBoolean}")
exitHereIf(session => true)
It makes the user exit the scenario at this point if the condition holds. The condition parameter is an Expression[Boolean].
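Applied to the original scenario, a sketch (assuming processData returns a Boolean and MyData was saved by a check):

val scn = scenario("MyAwesomeScenario")
  .exec(http("Request")
    .get(requestUrl)
    .check(status.is(200))
    .check(bodyString.saveAs("MyData")))
  // Flag invalid data in the session, then exit the scenario if the flag is set
  .exec(session => session.set("dataInvalid", !processData(session("MyData").as[String])))
  .exitHereIf("${dataInvalid}")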
I implemented something using exitHereIfFailed that sounds like exactly what you were trying to accomplish. I normally use this after a virtual user attempts to sign in.
exitHereIfFailed is used this way:
val scn = scenario("MyAwesomeScenario")
  .exec(http("Get data from endpoint 1")
    .get(request1Url)
    .check(status.is(200))
    .check(bodyString.saveAs("MyData"))
    // The processing has to run inside the check, e.g. via transform
    .check(bodyString.transform(body => processData(body)).is(true)))
  .exitHereIfFailed // If we weren't able to get the data, don't continue
  .exec(http("Send the data to endpoint 2")
    .post(request2Url)
    .body(StringBody("${MyData}")))
This scenario will abort gracefully at exitHereIfFailed if any of the checks prior to exitHereIfFailed have failed.