I am new to the Locust framework. What is the RPS shown in the web UI (top right)?
I am trying the following two use cases:
with 10 users and a hatch rate of 5, I get an RPS of 6;
with 20 users and a hatch rate of 5, I get an RPS of 11.
Since no changes have been made on the server side, why does the RPS change with the number of users? Is this expected, or am I missing something?
RPS is "Requests Per Second" (AKA throughput)... it's a rate.
In your example, you have doubled the number of users... because you are applying more virtual users, your RPS has increased.
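As a rough sanity check, assuming each simulated user alternates between waiting and making a request, the throughput is approximately users / (wait time + response time). With 10 users and roughly 1.6-1.7 s per request cycle you get about 6 RPS; doubling to 20 users with a similar per-user cycle time roughly doubles that to about 11 RPS, which matches your numbers. RPS keeps scaling with the user count until the server becomes the bottleneck and response times start to grow.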
I have a ramp-down test in Gatling that decreases the number of users over time: for example, 5 fewer users every minute, going from 15 users down to 5. For the user injection I use constantConcurrentUsers():
setUp(
  Test.inject(
    (15 to 5 by -5)
      .map(i => constantConcurrentUsers(i) during 60)
  )
)
Screenshot with constantConcurrentUsers()
But users arrive with some spikes. I want it to be more constant, like with atOnceUsers() where we don't have any spikes. Is there another way to do this in Gatling?
But users arrive with some spikes.
This is just a consequence of this chart displaying the "active users" metric, not the "concurrent users" one. See the documentation.
I am simplifying my problem with the following scenario:
3 friends share a loyalty card. The card has two restrictions:
it can be used a maximum of 10 times (it does not matter which friend uses the card, i.e. friend_a alone could use it all 10 times);
the maximum amount of money on the card is 200, so one "event" of value = 200 "completes" the card.
I am using a Kafka producer that sends events to the Kafka cluster like this:
{ "name": "friend_1", "value": 10 }
{ "name": "friend_3", "value": 20 }
The events are posted to a topic that is consumed by a Kafka Streams application, which groups by key and aggregates to sum the money spent. That seems to work; however, I am facing a "concurrency issue".
Let's imagine the card has been used 9 times, so only 1 use remains, and the total money spent is 190, which means there are 10 units left to spend.
Now friend_2 wants to buy something that costs 11 units (which should not be allowed) and friend_3 wants to buy something that costs 9 units (which should be allowed). Friend_3 will modify the state by using the card for the 10th time. All further attempts should not modify anything.
So it seems reasonable for the card user to know whether the event they sent modified the usage count and the total amount. How can I do that in Kafka? With the Streams aggregation I can always increase the values, but how do I know whether my action "modified the state" of the card?
UPDATE: the card user should immediately get negative feedback if the transaction violates a rule.
From how I understand your question, there are a few options.
One option is to derive a new stream after the aggregation that you filter() for data that would modify 'the state' of the card, like filtering for all events that have > 200 units spent or > 10 uses. This stream can then be used to inform the card user(s) that the card has been spent, e.g. by sending an email. This approach can be implemented solely with the DSL.
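A minimal Java sketch of that first option, assuming purchases arrive on a topic keyed by card id with the spent units as a Long value (the topic names and the 200-unit check are illustrative, not taken from the question's actual setup):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;

public class CardLimitTopology {

    // Builds a topology that flags cards whose running total exceeds 200 units.
    static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        // Purchases keyed by card id, value = units spent in that purchase.
        KStream<String, Long> purchases =
                builder.stream("card-purchases", Consumed.with(Serdes.String(), Serdes.Long()));

        // The aggregation the question already has: running total of units spent per card.
        KTable<String, Long> totalSpent = purchases
                .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
                .aggregate(
                        () -> 0L,
                        (cardId, units, total) -> total + units,
                        Materialized.with(Serdes.String(), Serdes.Long()));

        // Derive a stream of cards that crossed the limit; consume it to notify the card users.
        totalSpent.toStream()
                .filter((cardId, total) -> total > 200)
                .to("card-over-limit", Produced.with(Serdes.String(), Serdes.Long()));

        return builder.build();
    }
}

A consumer of the card-over-limit topic (or a further DSL step) can then trigger the email or any other notification.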
When more flexibility or tighter control is needed, another option is to use the Processor API (which you can integrate with the DSL, so most of your code could keep using the DSL) and implement the aggregation step yourself, working with state stores that you attach to a Transformer or Processor. During the aggregation you can implement the logic that checks whether the incoming event is valid (in your example: friend_3 with 9 units is valid, friend_2 with 11 units is not). If it is valid, the aggregation increases the card's counters (units and uses), and that's it. If it is invalid, the event is discarded and will not modify the counters, and the Transformer/Processor can emit a new event to another stream that tells the card user(s) that something didn't work. You can similarly implement the functionality to inform users that a card has been fully used, that a card is no longer usable, or any other 'state change' of that card.
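A sketch of that second option, using a ValueTransformerWithKey attached to a state store via transformValues() (one way to mix the Processor API into the DSL). All names, the store layout, and the 10-use/200-unit limits are illustrative; for simplicity the card state is kept as a plain "uses,totalSpent" String so only built-in serdes are needed:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.*;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

public class CardValidationTopology {

    static final String STORE = "card-state-store"; // illustrative store name
    static final int MAX_USES = 10;
    static final long MAX_UNITS = 200L;

    // Validates each purchase against the card's current state before updating it.
    static class CardValidator implements ValueTransformerWithKey<String, Long, String> {
        private KeyValueStore<String, String> store;

        @Override
        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            store = (KeyValueStore<String, String>) context.getStateStore(STORE);
        }

        @Override
        public String transform(String cardId, Long units) {
            String state = store.get(cardId);
            long uses = 0, total = 0;
            if (state != null) {
                String[] parts = state.split(",");
                uses = Long.parseLong(parts[0]);
                total = Long.parseLong(parts[1]);
            }
            boolean accepted = uses < MAX_USES && total + units <= MAX_UNITS;
            if (accepted) {
                // Only valid purchases modify the card's state.
                store.put(cardId, (uses + 1) + "," + (total + units));
            }
            // Emit a result the card user can react to immediately.
            return (accepted ? "ACCEPTED:" : "REJECTED:") + units;
        }

        @Override
        public void close() {}
    }

    static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        StoreBuilder<KeyValueStore<String, String>> storeBuilder =
                Stores.keyValueStoreBuilder(
                        Stores.persistentKeyValueStore(STORE),
                        Serdes.String(), Serdes.String());
        builder.addStateStore(storeBuilder);

        KStream<String, Long> purchases =
                builder.stream("card-purchases", Consumed.with(Serdes.String(), Serdes.Long()));

        // Per-purchase results; rejected ones give the card user immediate negative feedback.
        purchases
                .transformValues(CardValidator::new, STORE)
                .to("card-purchase-results", Produced.with(Serdes.String(), Serdes.String()));

        return builder.build();
    }
}

Because the result stream carries an ACCEPTED/REJECTED marker per purchase, the card user can be told immediately when a rule is violated, which also addresses the UPDATE in the question. On newer Kafka Streams versions you may prefer process()/processValues(), which supersede transform()/transformValues().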
Also, depending on what you want to do, take a look at the interactive queries feature of Kafka Streams. Sometimes other applications may want to do quick point lookups (queries) of the latest state of something, like the card's state, which can be done with interactive queries via e.g. a REST API.
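For completeness, a small Java sketch of such a point lookup against the state store from the previous sketch (assuming Kafka Streams 2.5+ and the illustrative store name used above; older versions use the streams.store(name, queryableStoreType) overload):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class CardStateLookup {

    // Point lookup of a card's latest state, e.g. behind a REST endpoint.
    static String currentState(KafkaStreams streams, String cardId) {
        ReadOnlyKeyValueStore<String, String> store = streams.store(
                StoreQueryParameters.fromNameAndType(
                        "card-state-store", QueryableStoreTypes.keyValueStore()));
        return store.get(cardId); // e.g. "9,190" -> 9 uses, 190 units spent
    }
}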
Hope this helps!
I have a registration REST API which I want to test as follows:
Register 15,000 users and pound the server with repeated incident reports (varying traffic, with a max of 100 per minute, a min of 1 per 24 hours, and an average of one per minute) over a period of 48 hours.
Which tool can I use to test the stability of my REST API?
For pounding the server with incidents over a period of time, you can use http://runscope.com/.
It is helpful for testing APIs. You can trigger events in Runscope over a period of time or schedule it to hit the server as required.
I was doing some latency/performance testing for sending push notifications with Azure Notification Hub by consecutively sending many notifications in a foreach loop. It worked fine for 100 "SendNotification" requests, although it was relatively slow (14 s), but I got a QuotaExceededException for 1000 requests in a row:
[QuotaExceededException: The remote server returned an error: (403)
Forbidden. The request was terminated because the namespace
pushnotification-testing is being throttled. Please wait 60 seconds
and try again. TrackingId:...
Even when I don't wait for 60 seconds as advised, I can again execute 100 consecutive requests, but 1000 requests in a row always fail... Anything slightly above 100 consecutive requests fails most of the time...
I couldn't find any documentation on these limitations. This should be documented somewhere, so I can be sure Azure Notification Hubs will fit my needs.
The answer to this question says
There is throttling of the CRUD operation rate. Quotas depend on the tier you are on, but it is not going to be less than 2000 operations per minute per namespace in any case. If the quota is exceeded, then the service returns 403.
For me, it seems to be fewer than 2000 operations. By the way, I'm using the "FREE" tier for testing, but I guess we would switch to "STANDARD" for production.
Has anyone similar experiences or knows where to look for more information?
In particular, what are the operation quota limitations per timeframe for the different tiers of Azure Notification Hubs?
UPDATE 1: It's weird, but sending 1000 requests in parallel works most of the time, whereas sending them consecutively always fails on the 101st request.
To the best of my knowledge, right now NH has the following limitations on the number of SENDS (not registrations) per namespace per minute per NH machine:
Free tier: 100
Basic tier: 900
Standard tier: 11500
Sending massively in parallel allows you to send more because the calls are very likely to be routed to different machines.
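If you do hit the quota in a sequential test, one common client-side mitigation is to catch the throttling error, wait, and retry. A minimal Java sketch of that idea; SendAction/sendOnce() are hypothetical placeholders for whatever Notification Hubs send call you actually make, and the 60-second wait simply follows the error message quoted above:

import java.util.concurrent.TimeUnit;

public class ThrottledSender {

    // Hypothetical placeholder for the actual Notification Hubs send call.
    interface SendAction {
        void sendOnce() throws Exception;
    }

    // Retry a send that fails with a throttling error (HTTP 403 / QuotaExceededException),
    // waiting 60 seconds between attempts as the error message suggests.
    static void sendWithRetry(SendAction action, int maxAttempts) throws Exception {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                action.sendOnce();
                return;
            } catch (Exception e) {
                if (attempt == maxAttempts) {
                    throw e; // give up after the last attempt
                }
                TimeUnit.SECONDS.sleep(60);
            }
        }
    }
}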
Does anyone know what the limit is for batch requests when using FB's Graph API?
From their documentation:
Limit
We currently limit the number of batch requests to 20.
Source
That's not clear. Is that 20 per 600 seconds? Per day? Total for one app ever?
It is 50 now: 50 requests per batch.
It means that 20 individual requests are allowed to be batched together into a single batched request, which saves you from sending 20 individual HTTP requests at the same time.
If you have more than twenty (20), you can build an array, break it into groups of 20 or fewer, and loop through them in your PHP in one "session" (a sketch of this chunking idea follows below). We have one app that will run 600-700 OG requests, but it is sloooooooow, up to 300 seconds sometimes, depending on FB.
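The answer above talks about PHP, but the chunking idea is language-agnostic. A small Java sketch of splitting a request list into batches no larger than the limit (how each batch is then sent to the Graph API is left out, since that depends on your client library):

import java.util.ArrayList;
import java.util.List;

public class GraphBatcher {

    static final int MAX_BATCH_SIZE = 20; // 50 on newer Graph API versions

    // Split the full request list into chunks no larger than the batch limit.
    static <T> List<List<T>> chunk(List<T> requests) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < requests.size(); i += MAX_BATCH_SIZE) {
            batches.add(requests.subList(i, Math.min(i + MAX_BATCH_SIZE, requests.size())));
        }
        return batches;
    }
}

Each chunk can then be sent as one batched Graph API request, keeping you under the 20 (or 50) per-batch limit.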