Gatling performance and load testing with Camunda BPMN - Scala

I am working on an application using Camunda BPMN. This application calls RESTful
services for fetching, saving, or updating data in the backend. I want to write load and performance tests for this application. I am able to test individual endpoints with Scala scripts in Gatling, but how do I test one entire BPMN workflow? Is there any head-start material that I can refer to?

Load and performance testing is a very complicated topic and usually depends on a number of external environmental factors.
I think a good starting point would be the user guide and our examples repository to get a general idea of testing practices.
Once you have chosen your way of testing the whole process, you would then have to identify the parameters that simulate high load in your environment and adjust them in your test.
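As a concrete illustration, one whole workflow can be modelled as a single Gatling scenario that chains the REST calls a process instance goes through. Below is a minimal sketch assuming Camunda 7's REST API, a hypothetical process key, and a single user task; adapt the endpoints, payloads, and injection profile to your own process:

```scala
import scala.concurrent.duration._

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class OrderProcessSimulation extends Simulation {

  val httpProtocol = http
    .baseUrl("http://localhost:8080/engine-rest") // Camunda 7 REST API root (assumed)
    .contentTypeHeader("application/json")

  val scn = scenario("Entire BPMN workflow")
    // 1. Start a process instance and remember its id
    .exec(
      http("start process")
        .post("/process-definition/key/order-process/start") // hypothetical process key
        .body(StringBody("""{"variables": {}}""")).asJson
        .check(jsonPath("$.id").saveAs("processInstanceId"))
    )
    // 2. Look up the first user task of that instance
    .exec(
      http("fetch task")
        .get("/task?processInstanceId=#{processInstanceId}")
        .check(jsonPath("$[0].id").saveAs("taskId"))
    )
    // 3. Complete it, which lets the process continue to the next step
    .exec(
      http("complete task")
        .post("/task/#{taskId}/complete")
        .body(StringBody("""{"variables": {}}""")).asJson
    )

  // Ramp up 50 users over 30 seconds -- tune these to simulate your load
  setUp(scn.inject(rampUsers(50).during(30.seconds))).protocols(httpProtocol)
}
```

Each virtual user then drives one process instance end to end, so the report shows the latency of the whole workflow rather than of isolated endpoints.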

Related

Xtext Web Load Testing

Good day...
I have developed a DSL in Xtext web using Eclipse. It has a login page, and multiple users can log in from anywhere.
Now my requirement is to load test this web DSL. I need to test:
- a certain number of users in parallel
- how conflicts are avoided/managed
- the perceived latency experienced by the users (the lower the better)
Please guide me on how to perform this load and stress test on a Jetty server.
Thanks,
If you're talking about xtext-web, it's just a matter of sending the proper HTTP requests to simulate your users accessing the endpoint.
Just choose a load testing tool which supports the HTTP protocol (the vast majority of them do) and implement your test scenario steps. If you're looking for a Java-based one, there are Apache JMeter and Gatling.
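For instance, a minimal Gatling sketch covering "users in parallel" and "perceived latency" could look like the following; the host, the /login form fields, and the /editor path are assumptions, so record or copy the real requests your web DSL sends:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class XtextWebLoadTest extends Simulation {

  val httpProtocol = http.baseUrl("http://localhost:8080") // assumed Jetty host/port

  val scn = scenario("Parallel DSL users")
    // log in first -- the form field names are hypothetical
    .exec(
      http("login")
        .post("/login")
        .formParam("username", "user1")
        .formParam("password", "secret")
    )
    // then exercise the page your DSL users actually hit
    .exec(http("open editor").get("/editor"))

  // "certain number of users in parallel": here, 100 arriving at once;
  // Gatling's report then gives per-request latency percentiles
  setUp(scn.inject(atOnceUsers(100))).protocols(httpProtocol)
}
```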

Running Multiple Deep Learning Models as microservices where an aggregation microservice calls all of them to collect all outputs

I have an endpoint which calls multiple deep learning models (Hugging Face transformer models to be run on CPU), aggregates all their outputs based on some logic, and then sends out a response to the user's request (all these models need to run in parallel). I want to take it to the next step and move it to K8s to take advantage of its scalability.
The plan under consideration
The current plan is to turn each deep learning model into a small FastAPI endpoint so that all of these models can be deployed as microservices, then create another microservice which, on a user's request, will call these microservices and aggregate their output.
Does this seem like a good approach?
Secondly, do you think I should look into serving frameworks such as those provided by MLflow and Kubeflow (although currently none of them offer any way to natively call these model microservices)? If you are familiar with Ray.io, it does offer a way to build such pipelines and call them via ServeHandlers, but given that they are still under development, I am facing a lot of bugs when trying to deploy it on K8s. I would like to move to a more battle-tested architecture. I would love to hear your suggestions on how I should move forward with this; the aggregation pattern I have in mind is sketched below.
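For reference, stripped of any particular serving framework, the fan-out/aggregate pattern under consideration boils down to something like this minimal sketch (the in-cluster service names, payload, and combine step are all placeholders):

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object Aggregator {
  implicit val ec: ExecutionContext = ExecutionContext.global
  private val client = HttpClient.newHttpClient()

  // hypothetical in-cluster service names for the model microservices
  private val modelUrls = Seq(
    "http://model-a:8000/predict",
    "http://model-b:8000/predict",
    "http://model-c:8000/predict"
  )

  private def callModel(url: String, payload: String): Future[String] = Future {
    val request = HttpRequest.newBuilder(URI.create(url))
      .header("Content-Type", "application/json")
      .POST(HttpRequest.BodyPublishers.ofString(payload))
      .build()
    client.send(request, HttpResponse.BodyHandlers.ofString()).body()
  }

  def aggregate(payload: String): String = {
    // fan out: all model calls run concurrently; sequence waits for every one
    val outputs = Future.sequence(modelUrls.map(callModel(_, payload)))
    // placeholder combine step -- replace with your real aggregation logic
    Await.result(outputs, 30.seconds).mkString("[", ",", "]")
  }
}
```

On K8s, each model service would sit behind its own Deployment/Service so it can be scaled independently of the aggregator.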

How can we perform performance testing of an API if I am using Rest Assured?

I am planning to use Rest Assured for API testing but also want the same tests to carry out performance testing. Is there a way we can achieve performance testing by integrating another tool/framework with Rest Assured?
If you just need to run your Rest Assured requests in a multithreaded manner, you can consider using a micro-benchmarking library like JMH.
However, a better idea would be converting your Rest Assured tests into a normal HTTP-based load test; this way you will get HTTP-protocol-related metrics:
1. Choose a load testing tool which provides HTTP recording capabilities via a proxy. You can check out the Open Source Load Testing Tools: Which One Should You Use? article for some tools listed and compared.
2. Configure Rest Assured to use the load testing tool as the proxy (see the sketch after this list).
3. Run your Rest Assured test.
4. The load testing tool should capture the requests, which you should then be able to replay with multiple virtual users.
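Step 2 is a one-liner in Rest Assured. A minimal sketch, assuming the recorder's proxy listens on localhost:8888 and using a hypothetical endpoint:

```scala
import io.restassured.RestAssured

object RecordedApiTest extends App {
  // Step 2: route all Rest Assured traffic through the load testing tool's
  // recording proxy -- localhost:8888 is an assumption, use your recorder's port
  RestAssured.proxy("localhost", 8888)

  // Step 3: run an ordinary Rest Assured test; the proxy records each request
  val response = RestAssured.given()
    .when()
    .get("https://your-api.example.com/users") // hypothetical endpoint
  assert(response.statusCode == 200)
}
```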

Where to write tests for a Frontend/Backend application?

I want to write a web application with a simple Frontend-Backend (REST API) architecture.
It's not clear to me where and how to write tests.
Frontend: should I write tests mocking API responses and testing only UX/UI?
Backend: should I write API call tests here and eventually more fine-grained unit tests on classes?
But in this way I'm afraid that the Frontend tests are not aware of the real API responses (because the mocking is done independently from the backend).
On the other side, if I don't mock API responses and use real responses from the backend, how can the Frontend client prepare the DB to get the data it wants?
It seems to me that I need three kinds of tests:
- UX/UI testing: the Frontend is working with a set of mock responses
- API testing: the API is giving the correct answers given a set of data
- Integration testing: the Frontend is working by really calling the backend with a set of data (generated by whom?)
Are there frameworks or tools to make this as painless as possible?
It seems very complicated to me (if the API spec changes I must rewrite a lot of tests).
Any suggestion welcome.
Well, you are basically right. There are 3 types of tests in this scenario: backend logic, frontend behaviour, and integration. Let's split them up:
Backend tests
You are testing mainly the business logic of the application. However, you must test the entire application stack: domain, application layer, infrastructure, presentation (API). These layers require both unit testing and integration testing, plus some pure black-box testing from the user's perspective. But this is a complex problem on its own; the full answer would be extremely long. If you are interested in some techniques regarding testing applications in general, please start another thread.
Frontend behaviour
Here you test whether the frontend app uses the API the right way. You mock the backend layer and write mostly unit tests. Now, as you noticed, there can be some problems regarding the real API contract. However, there are ways to mitigate those kinds of problems. First, a link to one of these solutions: https://github.com/spring-cloud/spring-cloud-contract. Now, some explanation. The idea is simple: the API contract is driven by the consumer. In your case, that would be the frontend app. The frontend team cooperates with the backend team to create a sensible API meeting all of the client's needs. The frontend tests are therefore guaranteed to use the "real API". When the client tests change, the contract changes, so the backend must be refactored to the new requirements.
As a side note, you don't really need to use any concrete framework. You can follow the same methodology if you apply some discipline to your team. Just remember: the consumer defines the contract first.
Integration tests
To make sure everything works, you also need some integration/e2e testing. You set up a real test instance of your backend app. Then, you perform integration tests using the real server instead of fake mock responses. However, you don't need to (and should not) duplicate the same tests from other layers. You want to test whether everything is integrated properly; therefore, you don't test any real logic. You just choose some happy paths, maybe some failure scenarios, and perform these tests from the user's perspective. So, you assume nothing about the state of the backend app and simulate user interaction: something like add a new product, modify the product, fetch the updated product, or check a single authentication point. These tests don't really exercise any business logic; they only check whether the real API test server communicates properly with the frontend app.
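To make this concrete, a happy-path e2e check against a real test instance can be as small as the following sketch; the base URL, the /products endpoint, and the JSON shape are all hypothetical:

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object HappyPathE2E extends App {
  val client = HttpClient.newHttpClient()
  val base = "http://test-backend.local:8080" // real test instance (assumed address)

  // add a new product
  val create = HttpRequest.newBuilder(URI.create(s"$base/products"))
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString("""{"name": "widget"}"""))
    .build()
  val created = client.send(create, HttpResponse.BodyHandlers.ofString())
  assert(created.statusCode == 201, "product creation failed")

  // fetch the product list and check the new product shows up
  val fetch = HttpRequest.newBuilder(URI.create(s"$base/products")).GET().build()
  val listed = client.send(fetch, HttpResponse.BodyHandlers.ofString())
  assert(listed.body.contains("widget"), "new product not visible")
}
```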
Talking about the tools, it depends on your language preferences and/or how the app is being constructed.
For example, if your team feels comfortable with JavaScript, it could be interesting to use frameworks like Playwright or WebdriverIO (better if you plan to test mobile) for UI and integration tests. These frameworks can work together with others more specialised in API testing, like PactumJS, with the plus that they can share some functions.
If you organise the tests correctly, you will not have to do much extra work if the API spec changes.

Switching to Gatling for Load Testing

I'd like to use Gatling for REST performance and scalability web service testing. I'm currently using JMeter for this, as I wasn't aware of Gatling when I started the project. Gatling would integrate better and would be better for the project for a number of reasons.
I'd like to ask one main question:
Obviously, there's a lot of overhead in configuring Gatling with the correct web service information. I've already done this in JMeter and I'm not keen to do it again. For one of the sub-projects, we have a WADL but we have no such thing for the other. Is it possible, out of the box, to import:
a. JMeter test plans and
b. WADL documents
into Gatling?
I've looked through the docs but unfortunately I can't find anything that references these.
No, Gatling has neither.
Building a jmx converter is something we might investigate in 2013, as you're not the first one to ask for it. At this point, I'm a bit skeptical, as the logic and the configuration of JMeter and Gatling are quite different, so the features and the ways to use them don't map 1:1.
The easiest way to work with REST APIs is to use the Recorder, so you'd dump request bodies as templates and then inject data into them. See http://gatling.io/docs/2.1.6/http/http_request.html#request-body
If you work with JSON, you can use our JsonPath (or standard regex) checks in order to make assertions on the response body, or even capture data. See http://gatling.io/docs/2.1.6/http/http_check.html#defining-the-check-type
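Put together, the template-plus-check approach looks roughly like this inside a simulation (the endpoint, body file, and JSON path are illustrative):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// A body template dumped by the Recorder into bodies/create_order.json,
// with session data injected via Gatling EL, plus a JsonPath check that
// captures a value from the response for use in later requests.
val createOrder = http("create order")
  .post("/orders")
  .body(ElFileBody("bodies/create_order.json")).asJson // EL placeholders resolved per user
  .check(jsonPath("$.orderId").saveAs("orderId")) // assert and capture
```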
As for the JMeter side: using HttpSampler with a raw POST body on the latest 2.8 version is the right way to test web services.
Is that the way you are doing it?
The upcoming 2.9 has new performance improvements related to the memory and CPU consumed by post-processors.
Regarding (a), I don't think so.