I have a Gatling test suite, and in the simulation script I have a few assertions verifying that the mean response time of groups/requests meets a certain threshold. I am using Gradle to drive the suite, and some assertions fail as expected.
However, when I look at the Gatling test reports, I see no indication of the failed assertions. How do I expose the fact that certain assertions have failed from the report alone?
The Gatling suite is integrated into our CI, which publishes the Gatling report as an artefact. We want visibility in the team as to which assertions failed by looking at the report.
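For reference, the assertions in question look roughly like this in the simulation's setUp. This is a minimal sketch written with Gatling's Java DSL (the original may use the Scala DSL); the base URL, group name, and thresholds are placeholders:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;

public class ApiSimulation extends Simulation {

  HttpProtocolBuilder httpProtocol = http.baseUrl("https://example.org"); // placeholder base URL

  // Hypothetical group/request names
  ScenarioBuilder scn = scenario("api")
      .group("checkout").on(exec(http("create order").get("/orders")));

  {
    setUp(scn.injectOpen(constantUsersPerSec(10).during(60)))
        .protocols(httpProtocol)
        .assertions(
            // fail the run if the mean response time over all requests exceeds 500 ms
            global().responseTime().mean().lt(500),
            // the same kind of check, scoped to a single group
            details("checkout").responseTime().mean().lt(800));
  }
}
```

These assertions fail the Gradle task as expected; the question is only about surfacing those failures in the generated HTML report.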
That's not possible at the moment. Could you open a feature request on GitHub, please?
I have an API test that runs within an Azure DevOps pipeline using DataTestMethod within MSTest. My tests run fine, but the problem is that all tests are reported with the same name, so it's difficult to figure out which test failed. This works fine in Visual Studio on my local machine. Is there a way to fix this? I found an old thread about the same issue, but it has no solution.
Screenshot
As you can see, we cannot report the result for each subtest at the root level, which is also mentioned in the ticket you referenced. For more information about test results, you could refer to Test Analytics, which provides near real-time visibility into your test data for builds and releases. It helps improve the efficiency of your pipeline by identifying repetitive, high-impact quality issues.
I have Selenium Maven tests running in my release pipelines.
Now I want to automatically update the outcome of the test cases in my test plans.
I know there is a way of doing this for C#-based Selenium tests with the VSTest step.
But could someone help with the logic to do this for Selenium Maven tests?
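One possible direction is to push the outcomes from the Maven run yourself via the Azure DevOps Test Results REST API. The sketch below is only an assumption about how that could look: it posts a single outcome to an existing test run using the documented POST .../_apis/test/runs/{runId}/results endpoint; the organization, project, run ID, and test names are placeholders, and associating results with Test Plan test points additionally requires creating the run against the plan's point IDs.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class PublishOutcome {
    public static void main(String[] args) throws Exception {
        String org = "myorg", project = "myproject";   // placeholders
        String runId = "123";                          // existing test run ID (placeholder)
        String pat = System.getenv("AZDO_PAT");        // personal access token

        // One TestCaseResult entry: title, automated test name and outcome (placeholder values)
        String body = "[{\"testCaseTitle\": \"LoginTest\","
                + " \"automatedTestName\": \"com.example.LoginTest.login\","
                + " \"outcome\": \"Passed\"}]";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://dev.azure.com/" + org + "/" + project
                        + "/_apis/test/runs/" + runId + "/results?api-version=6.0"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic "
                        + Base64.getEncoder().encodeToString((":" + pat).getBytes()))
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```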
I am using REST Assured to automate my project. In the same project, I want to do performance testing of the API. How can I achieve this?
If you have an existing set of tests and want to run them in a multithreaded manner, the options are:
use an ExecutorService to run them in parallel (see the sketch after this list)
"wrap" them into methods with JMH annotations
use a load testing tool capable of running JUnit tests (or whichever xUnit framework you use), like the JUnit Request sampler of Apache JMeter
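As a rough illustration of the first option, the sketch below fires an existing REST Assured call from a fixed-size thread pool and records the wall-clock time of each call; runRestAssuredCall() and the endpoint are hypothetical stand-ins for whatever your tests actually do:

```java
import static io.restassured.RestAssured.given;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class ParallelRestAssuredRun {

    // Stand-in for one of your existing REST Assured tests (hypothetical endpoint)
    static void runRestAssuredCall() {
        given().baseUri("https://example.org")
               .when().get("/api/users")
               .then().statusCode(200);
    }

    public static void main(String[] args) throws Exception {
        int threads = 50;        // simulated concurrent users (placeholder)
        int iterations = 1000;   // total calls to make (placeholder)

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Long>> timings = new ArrayList<>();

        for (int i = 0; i < iterations; i++) {
            timings.add(pool.submit(() -> {
                long start = System.nanoTime();
                runRestAssuredCall();
                return (System.nanoTime() - start) / 1_000_000; // elapsed ms
            }));
        }

        long total = 0;
        for (Future<Long> t : timings) {
            total += t.get();
        }
        pool.shutdown();
        System.out.printf("mean response time: %d ms%n", total / iterations);
    }
}
```

This gives you concurrency and a crude mean, but none of the richer metrics listed below.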
However, the above approaches will only allow you to kick off your tests in parallel, and you won't be able to collect many metrics, such as:
number of active threads
number of hits per second
response time
HTTP-protocol-based metrics like response code, connect time, latency
So it makes sense to consider converting your REST Assured tests into "real" tests driven by a dedicated load testing tool. The majority of load testing tools provide record-and-replay capability by exposing an HTTP proxy, so if you run your REST Assured tests via this proxy, the load testing tool will capture them and convert them into the corresponding HTTP requests.
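For example, REST Assured can be pointed at such a recording proxy in one line. The sketch below assumes JMeter's HTTP(S) Test Script Recorder is listening on its default port 8888 on the same machine:

```java
import io.restassured.RestAssured;

public class RecordThroughProxy {
    public static void main(String[] args) {
        // Route all subsequent REST Assured traffic through the recording proxy,
        // so the load testing tool can capture and replay the HTTP requests.
        RestAssured.proxy("localhost", 8888);

        // ... now run your existing REST Assured tests as usual ...
    }
}
```

Note that recording HTTPS traffic this way typically also requires trusting the recorder's CA certificate in your test JVM.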
I have a question that's very similar to what's discussed here:
Integration Test of REST APIs with Code Coverage
I deployed a WAR file that exposes the REST APIs to a web server, and I'm using TestNG to write test cases for the REST APIs. I'm not unit testing; I'm only doing end-to-end / integration testing. Currently, I'm running the test cases from Eclipse on my machine.
My goal is to get coverage reports on the TestNG test cases.
Since the tests are local to my machine and the REST API is deployed on another server, EclEmma doesn't provide any meaningful data when I run the test cases on my machine.
Is there a way to point EclEmma to the web server instead of my local machine and get the code coverage report?
Would it be better/possible to include the tests in the WAR file and run the tests from the web server? That should allow me to get a meaningful code coverage report, right?
The easiest way forward in cases like this is normally to start the web server inside your IDE and run the tests with coverage measurement there. Even better, start the web server from within the tests; then a build tool like Maven can also do the code coverage reporting.
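A minimal sketch of the second suggestion, assuming an embedded Jetty server and a WAR at target/myapp.war (the port and path are placeholders): because the server now runs inside the test JVM, the JaCoCo/EclEmma agent attached to that JVM sees the application code being exercised by the TestNG suite.

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

public class EmbeddedServerSuite {

    private Server server;

    @BeforeSuite
    public void startServer() throws Exception {
        server = new Server(8080);             // port is a placeholder
        WebAppContext webapp = new WebAppContext();
        webapp.setWar("target/myapp.war");     // hypothetical path to the deployed WAR
        webapp.setContextPath("/");
        server.setHandler(webapp);
        server.start();                        // REST APIs are now served from the test JVM
    }

    @AfterSuite
    public void stopServer() throws Exception {
        server.stop();
    }
}
```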
I'm currently using Postman and the Advanced REST Client application to test my REST endpoints. These tools are great and make testing very easy. I am currently entering these calls manually and testing. However, I have a number of endpoints that have prerequisite calls which need to be made to handle their dependencies.
This is not a big deal; however, if there is a way I can chain these calls to run in a certain flow, waiting for each prerequisite to complete before running the next, I could harness this to craft a fully automated API testing suite, which would give more flexibility than entering them manually.
Postman lets you do this using the Jetpacks upgrade. You can then use the recently released Newman tool to run them from the command line.