JMeter Throughput Shaping Timer questions - REST

So, I am using JMeter's Throughput Shaping Timer to test the performance of our REST server. I noticed a few things I did not expect.
First of all, my setup details:
1) JMeter version: 3.0 r1743807
2) JMX file: DropBox Link
Now, my questions:
1) The Throughput Shaping Timer is configured to run for 60 seconds (100 rps for the first 30 seconds, 200 rps for the next 30 seconds). But the actual test runs for only 3 seconds. Why?
2) As per the plan, the number of requests per second should go from 100 to 200. But in my results it seems to decrease instead.
3) As per this plugin's documentation, the number of threads = desired requests per second * server response time / 1000. Is this because of how the plugin works internally, or is there a simple piece of logic I am missing?

The issue is with the Thread Group settings.
You have only 1 iteration and you ramp up 300 users in 1 second. So if JMeter can send all 300 requests and get the responses, it will finish the test immediately. The timer settings apply only while the test is running.
If you need the test to run for some duration (say, 60 seconds), then set Loop Count to Forever and Duration to 60.
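Regarding question 3: the documented formula is just the number of requests that have to be in flight at the same time. A back-of-the-envelope sketch with assumed numbers (not values taken from your test plan):
val targetRps = 200.0       // peak rate in the shaping timer schedule
val responseTimeMs = 50.0   // assumed average server response time
val threadsNeeded = targetRps * responseTimeMs / 1000.0   // = 10 threads to sustain 200 rps
So it is not a quirk of the plugin's internals: with too few threads, each thread spends its time waiting on responses and the target rate simply cannot be reached.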

Related

Limit 100 requests per min setup in JMeter is not working

I have tried a couple of suggestions mentioned on other sites on how to configure/limit 100 requests per minute for a given REST endpoint for a single user. It's not working!
Can someone please guide me on how to set up a limit of 100 requests per minute for a given REST endpoint?
Thank you in advance!
The easiest way is adding a Constant Throughput Timer. However, be aware that it is only precise at the minute level, so you will have to let your test run for at least a minute before you start seeing the rate limiting; if your throughput is higher during the first minute, consider playing with the ramp-up.
If you have only 1 user and your test runs for a minute or less, you will have to consider one of the following options:
Precise Throughput Timer
Throughput Shaping Timer
The latter is extremely easy to use and provides a visual way of defining the target throughput.
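For a sense of scale, here is a back-of-the-envelope sketch of the single-user, 100 requests/minute case (the response time is an assumption, not a measured value):
val targetPerMinute = 100.0
val intervalSeconds = 60.0 / targetPerMinute   // 0.6 s between sample starts
val responseTimeSeconds = 0.2                  // assumed average response time
val pauseSeconds = intervalSeconds - responseTimeSeconds   // ~0.4 s the timer has to add after each request
A single thread can easily produce that rate, so the timer's only job is to insert the pauses.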

Gatling Scenario Response time

I am doing a load test for an API whose average response time is 5 seconds.
In my script I set up constantUsersPerSec(2) with a duration of 150 seconds:
.inject(constantUsersPerSec(2) during (150 seconds)),
Will it generate 2 requests per second, or fewer, because one request takes 5 seconds to complete?
constantUsersPerSec(2) will start a new user executing the scenario every 0.5 seconds or so. For this sort of injection profile, Gatling doesn't take into consideration how long a request takes to complete.
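Put together, a minimal sketch of that injection profile (scn and httpConf stand in for whatever scenario and protocol configuration you already have):
import scala.concurrent.duration._
setUp(
  scn.inject(constantUsersPerSec(2) during (150 seconds))  // ~300 users in total, one new user every ~0.5 s
).protocols(httpConf)
Because this is an open workload model, users keep arriving at that rate even while earlier users are still waiting on their 5-second responses, so you should see roughly 2 new requests started per second.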

How does JMeter start sending requests to the server

If Threads: 100, Ramp-up: 1 and Loop Count: 1 is the configuration, how will JMeter start sending requests to the server?
Will requests be sent at 1 request per second, or will all requests be sent to the server at once?
JMeter will send requests as fast as it can, to wit:
It will start all threads (virtual users) you define in Thread Group within the ramp-up period (in your case - 100 threads in 1 second)
Each thread (virtual user) will start executing the Samplers present in the Thread Group from top to bottom (or according to the Logic Controllers)
When there are no more samplers to execute or loops to iterate, the thread will be shut down
When there are no more active threads left, the JMeter test will end.
With regard to requests per second, it mostly depends on your application's response time, i.e.
if you have 100 virtual users and response time is 1 second - you will get 100 requests/second
if you have 100 virtual users and response time is 2 seconds - you will get 50 requests/second
if you have 100 virtual users and response time is 500 milliseconds - you will get 200 requests/second
etc.
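Written out as simple arithmetic (assuming no think time and no other bottleneck), those figures are just:
val threads = 100.0
def requestsPerSecond(responseTimeSeconds: Double): Double = threads / responseTimeSeconds
requestsPerSecond(1.0)   // 100 requests/second
requestsPerSecond(2.0)   // 50 requests/second
requestsPerSecond(0.5)   // 200 requests/second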
I would recommend increasing (and decreasing) the load gradually; this way you will be able to correlate increasing load with increasing throughput/response time/number of errors, etc., while releasing all threads at once will not tell you the full story (unless you're doing a form of spike testing; in that case consider using a Synchronizing Timer).
A JMeter ramp-up period of 1 means starting all 100 threads in 1 second.
This isn't a recommended setting, as described below:
The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds.
Ramp-up needs to be long enough to avoid too large a work-load at the start of a test, and short enough that the last threads start running before the first ones finish (unless one wants that to happen).
Start with Ramp-up = number of threads and adjust up or down as needed.
See also Can i set ramp up period 0 in JMeter?
Bear in mind that with a low ramp-up and many threads, you may be limited by local resources, so your results may be a measurement of client capability rather than server capability.
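Applied to the configuration in the question, the ramp-up arithmetic is:
val threads = 100
val rampUpSeconds = 1.0
val delayBetweenThreadStarts = rampUpSeconds / threads   // 0.01 s: a new thread starts every 10 ms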

VSTS Load Test: single request by many users over time

I have an end-point, let's call it https://www.ajax.org/api/v1/offers.
The scenario is that 80,000 users will access this endpoint one time each, and they will all make this one request within 60 minutes.
How exactly do you model this in a VSTS Load Test?
Thanks in advance!
Create a ".webtest" that does the request.
The load of 80,000 requests in one hour is about 1333 per minute, which is about 22 per second. (Check: 22 * 60 * 60 = 79200 and 23 * 60 * 60 = 82800, so 22 or 23 is about right.) If each request takes on average one second then you will need about 23 Virtual Users (VUs) to create the total load. If each request takes on average two seconds then you would need about 46 VUs. (Check: (46 / 2) * 60 * 60 = 82800 and (45 / 2) * 60 * 60 = 81000, so still about right.) Even though there is only one test, you must specify a test mix, so use "Test mix based on number of tests started".
Once the average request time under load is known, its value can be used in the manner above to set the required number of VUs.
Another approach starts with the sums above to find the minimum number of VUs but uses a "Test mix based on user pace". Suppose we specify 100 VUs (which is normally considered a modest load). Then we need each VU to process 80000/100 = 800 webtests per hour, and we just specify that 800 in the test mix window. On reflection this may be the better approach, but I think the analysis above is still useful.
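The sums above, written out (the 1-second and 2-second response times are the same illustrative figures used in the answer):
val totalTests = 80000.0
val durationSeconds = 3600.0
val targetRps = totalTests / durationSeconds   // ~22.2 requests/second
def vusNeeded(avgResponseSeconds: Double): Double =
  targetRps * avgResponseSeconds               // 22.2 at 1 s, 44.4 at 2 s; round up to the 23 and 46 VUs above
val pacedVus = 100.0
val testsPerVuPerHour = totalTests / pacedVus  // 800 webtests per hour per VU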
To simulate 80,000 different users, ensure that the "Percentage of new users" is 100 in the scenario properties.
If you want exactly 80,000 requests in the run, then specify that as the "Number of iterations" in the "Run settings" along with "Use test iterations" set to "true". If you want about 80,000, then I recommend setting "Use test iterations" to "false" and giving a "Run duration" of one hour.

How to test the page loading time with Gatling

For example, I need to check that for 1000 users it responds within 3 seconds.
Are the number of users and the response time configurable?
This answer targets Gatling 2.
You can set the target number of users by configuring the "injection profile" of your simulation:
setUp(scn.inject(atOnceUsers(1000))) // To start all users at the same time
setUp(scn.inject(rampUsers(1000) over (30 seconds))) // To start gradually, over 30 seconds
For more information, please refer to the injection DSL documentation
If you want to check that all your users get responses in less than 3 seconds, one way to ensure this is Gatling's assertions:
setUp(...).assertions(global.responseTime.max.lessThan(3000))
If this assertion fails, meaning at least one request took more than 3 seconds, Gatling will clearly indicate the failure after your simulation is completed.
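For completeness, here is what the pieces might look like combined into one Gatling 2 simulation; the base URL and the request are placeholders for whatever page you are actually testing:
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class PageLoadSimulation extends Simulation {

  val httpConf = http.baseURL("http://example.com")   // placeholder base URL

  val scn = scenario("Page load")
    .exec(http("home page").get("/"))                 // placeholder request

  setUp(scn.inject(rampUsers(1000) over (30 seconds)))
    .protocols(httpConf)
    .assertions(global.responseTime.max.lessThan(3000))  // fail the run if any response exceeds 3 seconds
}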