I am doing a load test for an API whose average response time is 5 seconds.
In my script I set constantUsersPerSec to 2 and the duration to 150 seconds:
.inject(constantUsersPerSec(2) during (150 seconds)),
Will it generate 2 requests per second, or fewer, because each request takes 5 seconds to complete?
constantUsersPerSec(2) will start a new user executing the scenario every 0.5 seconds or so. For this sort of injection profile, Gatling doesn't take into account how long a request takes to complete.
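One consequence worth knowing: with this open injection model the arrival rate stays fixed, so slow responses show up as more users in flight, not fewer requests. A rough sketch of that arithmetic (Little's Law, applied to the numbers from the question):

```scala
// Little's Law: with an open injection model, the average number of
// concurrent users equals arrival rate * average response time.
object LittlesLaw {
  def expectedConcurrentUsers(arrivalRatePerSec: Double, avgResponseTimeSec: Double): Double =
    arrivalRatePerSec * avgResponseTimeSec

  def main(args: Array[String]): Unit = {
    // 2 new users per second, each request taking ~5 s on average
    println(expectedConcurrentUsers(2, 5)) // roughly 10 users in flight at steady state
  }
}
```

So with constantUsersPerSec(2) and a 5 second response time, you would still see 2 requests per second arriving, carried by roughly 10 concurrent users.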
So I have the following scenario:
setUp(scenario.inject(
  nothingFor(30 seconds),            // 1
  rampUsers(10) during (30 seconds),
  nothingFor(1 minute),
  rampUsers(20) during (30 seconds)
).protocols(httpconf)).maxDuration(3 minutes)
I expected this scenario to start by doing nothing for 30 seconds, ramp up 10 users over 30 seconds, do nothing (pause) for a minute, and finish by ramping up 20 users over 30 seconds.
But what I got was a 30-second pause, a ramp-up of 10 users over 30 seconds, a steady state of 10 users for a minute, and then an additional ramp-up of 20 users (I ended up with 30 users running).
What am I missing here?
The injection profiles only specify when users start the scenario, not how long they stay active - that is determined by how long it takes a user to finish the scenario. So when you ramp 10 users over 30 seconds, one user starts the scenario every 3 seconds, but each keeps running until it finishes (however long that takes). I'm guessing your scenario takes more than a couple of minutes for a user to complete.
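If the intent is for each batch of users to stop before the next ramp begins, one option is to bound how long each user stays active inside the scenario itself. A minimal sketch (the scenario name and endpoint are placeholders, not from the original question):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Each injected user repeats the request loop for at most 1 minute,
// then finishes, so users from one ramp don't pile up into the next phase.
val boundedScn = scenario("Bounded users")
  .during(1 minute) {
    exec(http("request").get("/"))
  }
```

With a bounded scenario like this, the number of active users follows the injection profile much more closely.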
So, I am using JMeter's Throughput Shaping Timer to test the performance of our REST server. I noticed a few things I did not expect.
First of all, my setup details:
1) JMeter version: 3.0 r1743807
2) JMX file: DropBox Link
Now, my questions:
1) The Throughput Shaping Timer is configured to run for 60 seconds (100 rps for the first 30 seconds, 200 rps for the next 30 seconds). But the actual test runs for only 3 seconds, as shown below. Why?
2) As per the plan, the number of requests per second should go from 100 to 200. But here it seems to decrease, as shown above.
3) As per this plugin's documentation, the number of threads = desired requests per second * server response time / 1000. Is this because of how the plugin works internally, or is there some simple logic I am missing?
The issue is with the Thread Group settings.
You have only 1 iteration and ramp up 300 users in 1 second. So if JMeter can send all 300 requests and get the responses, JMeter will finish the test immediately. The timer settings apply only while the test is running.
If you need the test to run for some duration (say 60 seconds), then set the loop count to Forever and the duration to 60.
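The thread-count formula quoted in the question can be checked with quick arithmetic. A sketch, assuming a peak target of 200 rps and a worst-case response time of 1500 ms (the 1500 ms figure is an assumption for illustration, not from the original test plan):

```scala
// Thread-count estimate per the Throughput Shaping Timer documentation:
// threads = target RPS * max response time (ms) / 1000
object ThreadEstimate {
  def threadsNeeded(targetRps: Double, maxResponseTimeMs: Double): Int =
    math.ceil(targetRps * maxResponseTimeMs / 1000).toInt

  def main(args: Array[String]): Unit = {
    // 200 rps peak with an assumed 1500 ms worst-case response time
    println(threadsNeeded(200, 1500)) // 300 threads, matching the 300 users in the plan
  }
}
```

The formula is just Little's Law again: a thread can only issue one request per response time, so you need enough threads to cover the requests in flight at the target rate.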
I have a registration REST API which I want to test as follows:
Register 15,000 users and pound the server with repeated incident reports (varying traffic, with a maximum of 100 per minute, a minimum of 1 per 24 hours, and an average of one per minute) over a period of 48 hours.
Which tool can I use to test the stability of my REST API?
For pounding the server with incidents over a period of time, you can use http://runscope.com/.
It is helpful for testing APIs. You can trigger events in Runscope over a period of time or schedule it to hit the server as required.
For example, I need to check that for 1000 users it responds within 3 seconds.
Is the number of users and response times configurable?
This answer targets Gatling 2.
You can set the target number of users by configuring the "injection profile" of your simulation:
setUp(scn.inject(atOnceUsers(1000))) // To start all users at the same time
setUp(scn.inject(rampUsers(1000) over (30 seconds))) // To start gradually, over 30 seconds
For more information, please refer to the injection DSL documentation.
If you want to check that all your users respond in less than 3 seconds, one way to ensure this is Gatling's assertions:
setUp(...).assertions(global.responseTime.max.lessThan(3000))
If this assertion fails, meaning at least one user responded in more than 3 seconds, Gatling will clearly indicate the failure after your simulation is completed.
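Assertions can also be combined. A sketch against the Gatling 2 assertions DSL (here `httpConf` stands in for whatever HTTP protocol configuration your simulation already defines):

```scala
setUp(scn.inject(atOnceUsers(1000)))
  .protocols(httpConf)
  .assertions(
    global.responseTime.max.lessThan(3000),           // no request slower than 3 s
    global.successfulRequests.percent.greaterThan(95) // and at least 95% of requests succeed
  )
```

Adding a success-rate assertion guards against the case where responses come back fast only because they are errors.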
When we discuss the performance of a distributed system we use terms like tp50, tp90, tp99.99, and TPS.
Could someone explain what we mean by those?
tp90 is the maximum time under which 90% of requests have been served.
Imagine you have times:
10s
1000s
100s
2s
Calculating a TP value is very simple:
1) Sort all times in ascending order: [2s, 10s, 100s, 1000s]
2) Find the last item in the portion you need. For TP50 that is ceil(4 * 0.5) = 2, so you need the 2nd request. For TP90 it is ceil(4 * 0.9) = 4, so you need the 4th request.
3) Take the time of the item found above: TP50 = 10s, TP90 = 1000s.
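The three steps above can be sketched directly in code (a minimal illustration of this nearest-rank method, not a production percentile implementation):

```scala
// TPx by the nearest-rank method described above:
// sort ascending, then take the ceil(n * x/100)-th value (1-based rank).
object Percentile {
  def tp(times: Seq[Double], percentile: Double): Double = {
    val sorted = times.sorted
    val rank = math.ceil(sorted.length * percentile / 100).toInt
    sorted(rank - 1) // convert 1-based rank to 0-based index
  }

  def main(args: Array[String]): Unit = {
    val times = Seq(10.0, 1000.0, 100.0, 2.0)
    println(tp(times, 50)) // 10.0   (TP50)
    println(tp(times, 90)) // 1000.0 (TP90)
  }
}
```

Note how sensitive the high percentiles are to outliers: a single 1000 s response drags TP90 from 100 s up to 1000 s.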
Say we are referring to the performance of an API: TP90 is the maximum time under which 90% of requests have been served.
TPx: Max response time taken by xth percentile of requests.
Time taken by 10 requests in ms: [2,1,3,4,5,6,7,8,9,10] - there are 10 response times.
TP100 = 10
TP90 = 9
TP50 = 5