We're trying to stress test our REST-ish app with Gatling. We want our users to make a POST with a different fileBody on every request.
Our scenario looks like:
scenario("100%")
  .during(15 minutes) {
    exec(requestStream.next())
      .pause(118 seconds, 120 seconds)
  }
  .users(2)
  .delay(2 minutes)
  .protocolConfig(httpConf)
...build up several scenarios...
setUp(severalScenarios)
This runs fine, but it appears that the block with the exec is only executed once, when each scenario is first built. We thought the block would be executed every time the during(...) loop comes around, giving each user a new Request from the iterator to run every 15 minutes.
Are we missing something? Is there a smarter way of doing this?
No, that's not the way the DSL works. The DSL elements are actually builders that are resolved once and for all when the simulation is loaded.
What you want is to inject dynamic data into your scenario elements, and for that you have to use Feeders, the user Session, Gatling EL, etc. What does your requestStream look like?
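For example, here is a minimal sketch with a custom Feeder (newer-DSL syntax; the request name, the endpoint, the "payloadFile" session key, and the assumption that requestStream yields file paths are all illustrative):

val fileFeeder = Iterator.continually(
  Map("payloadFile" -> requestStream.next()) // assumes requestStream yields file paths
)

scenario("100%")
  .during(15 minutes) {
    feed(fileFeeder)                           // pulls a fresh record on every loop pass
      .exec(
        http("upload")
          .post("/some/endpoint")
          .body(RawFileBody("${payloadFile}")) // EL resolved at run time, per virtual user
      )
      .pause(118 seconds, 120 seconds)
  }

The key difference is that the feeder is consumed each time the loop body executes, not once when the scenario is built.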
I'm working with code that uses a tumbling window of one day, and would like to send early results to a different DataStream on an hourly basis.
I understand that triggers are a way to go here, but don't really see how it would work.
The current code is as follows:
myStream
  .keyBy(...)
  .window(TumblingEventTimeWindows.of(Time.days(1)))
  .aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())
In my understanding, I should register a trigger, and then in its onEventTime method get hold of the TriggerContext and send data to the labeled output from there. But how do I get the current state of MyAggregateFunction there? Or would I need to do my own computation inside onEventTime()?
Also, the documentation states that "By specifying a trigger using trigger() you are overwriting the default trigger of a WindowAssigner.". Would my one day window then still fire correctly, or do I need to trigger it somehow differently?
Another way of doing this is creating two different operators - one that windows by 1 hour, and another that windows by 1 day. Would triggers be a preferred approach to that?
Rather than using a custom Trigger, it would be simpler to have two layers of windowing, where the hourly results are further aggregated into daily results. Something like this:
hourlyResults = myStream
  .keyBy(...)
  .window(TumblingEventTimeWindows.of(Time.hours(1)))
  .aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())

dailyResults = hourlyResults
  .keyBy(...)
  .window(TumblingEventTimeWindows.of(Time.days(1)))
  .aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())

hourlyResults.addSink(...)
dailyResults.addSink(...)
Note that the result of a window is not a KeyedStream, so you will need to use keyBy again, unless you can arrange to leverage reinterpretAsKeyedStream (see the Flink docs).
Normally when I get to more complex behavior like this, I use a KeyedProcessFunction. You can aggregate (and save in state) hourly and daily results, set timers as needed, and use a side output for the hourly results versus the regular output for the daily results.
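Here is a hedged sketch of that pattern, assuming a simple keyed sum over events shaped like (key, value, timestamp); the Event type, the output tag, and the millisecond arithmetic are illustrative, not taken from the original job:

import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.util.Collector

case class Event(key: String, value: Long, timestamp: Long)

// hourly partials go to this side output; daily totals use the regular output
val hourlyTag = OutputTag[(String, Long)]("hourly")

class HourlyAndDaily extends KeyedProcessFunction[String, Event, (String, Long)] {
  private val HOUR = 3600 * 1000L
  private val DAY  = 24 * HOUR

  // running sums in keyed state; java.lang.Long so that "no value yet" is null
  private lazy val hourSum: ValueState[java.lang.Long] =
    getRuntimeContext.getState(new ValueStateDescriptor("hourSum", classOf[java.lang.Long]))
  private lazy val daySum: ValueState[java.lang.Long] =
    getRuntimeContext.getState(new ValueStateDescriptor("daySum", classOf[java.lang.Long]))

  private def bump(state: ValueState[java.lang.Long], v: Long): Unit = {
    val prev = if (state.value() == null) 0L else state.value().longValue()
    state.update(prev + v)
  }

  override def processElement(
      e: Event,
      ctx: KeyedProcessFunction[String, Event, (String, Long)]#Context,
      out: Collector[(String, Long)]): Unit = {
    bump(hourSum, e.value)
    bump(daySum, e.value)
    // event-time timers at the end of the current hour and of the current day;
    // duplicate registrations for the same timestamp are deduplicated by Flink
    ctx.timerService().registerEventTimeTimer(e.timestamp - e.timestamp % HOUR + HOUR)
    ctx.timerService().registerEventTimeTimer(e.timestamp - e.timestamp % DAY + DAY)
  }

  override def onTimer(
      ts: Long,
      ctx: KeyedProcessFunction[String, Event, (String, Long)]#OnTimerContext,
      out: Collector[(String, Long)]): Unit = {
    // every timer falls on an hour boundary, so always emit the hourly partial
    if (hourSum.value() != null) {
      ctx.output(hourlyTag, (ctx.getCurrentKey, hourSum.value().longValue()))
      hourSum.clear()
    }
    // day boundaries additionally emit and reset the daily total
    if (ts % DAY == 0 && daySum.value() != null) {
      out.collect((ctx.getCurrentKey, daySum.value().longValue()))
      daySum.clear()
    }
  }
}

// usage (illustrative):
// val results = myStream.keyBy(_.key).process(new HourlyAndDaily)
// val hourlyResults = results.getSideOutput(hourlyTag)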
There are quite a few questions here; I will try to answer all of them. First of all, if you specify your own trigger using trigger(), you effectively override the default trigger, and the window may no longer work the way it would by default. For example, if you create a 1-day event-time tumbling window but override the trigger so that it fires on every 20th element, it will never fire based on event time.
Now, after your custom trigger fires, the output of MyAggregateFunction is passed to MyProcessWindowFunction, just as with the default trigger, so you don't need to access MyAggregateFunction from inside the trigger.
Finally, while it may be technically possible to implement a trigger that emits partial results every hour, my personal opinion is that you should go with the two separate windows. While that solution may create slightly more overhead and a larger state, it should be much clearer, easier to implement, and much more error resistant.
I use rampUsers(20) over (120) in my Gatling load test, but I got the following result.
I expected the active user count to be constant during the test.
rampUsers injects the defined number of users linearly over a given time, so your use of rampUsers(20) over (120) will result in Gatling starting one user every 6 seconds. The graph you're getting shows this, but what might be confusing is that since your scenario completes in less than 6 seconds, there's never more than one user active at a time.
If you're aiming for 20 concurrent users over 120 seconds, there's a different injection profile for that:
constantConcurrentUsers(20) during (120 seconds)
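Wired into a simulation it would look roughly like this (scn and httpConf stand in for your own scenario and protocol configuration):

// closed-model injection: Gatling keeps 20 users active for the full 120 seconds,
// starting a new user whenever one finishes
setUp(
  scn.inject(constantConcurrentUsers(20) during (120 seconds))
).protocols(httpConf)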
I want to test an Eclipse RCP application with SWTBot. The tests themselves run fine. The problem is performance: the first test cases complete in about 1 minute, but after a while they take 1 hour or longer. Each test case seems to take significantly longer than its predecessor, although the test cases aren't more complex.
I suspected that I had operations in my tests that cause the bot to wait for timeouts a lot, but that's not the case. A main culprit seems to be SWTBotMenu#contextMenu, which takes a lot of the time, and I can't figure out why; it's simple operations like tree().contextMenu("Save").click();
You can check the SWTBotPreferences.TIMEOUT constant; by default it is 5000 ms, and it can be lowered by assigning to it before the tests run. It should not take one hour given that the first test case completes in one minute.
How long does the application take if the same user actions are performed manually instead of through SWTBot? Also try debugging to see where exactly it stalls in your SWTBot code.
I currently have an app powered by Parse that monitors the wait times for a certain amusement park. On Parse, each ride has its own class, and each object in a class has a string field named "waitTime" that holds the most recent wait time submitted. I would like to use Cloud Code to reset the waitTime field of every object to "0" at 1:00 AM each morning. I have no experience with Cloud Code or anything like it. How would I go about doing something like this? Thank you in advance for your help!
Have a job that runs every 5 minutes, comparing the current server time to any scheduled times (e.g. 1 AM, accounting for timezone issues), and if it matches, run that task (in your case, resetting the "waitTime" fields).
It is common to have this single master job trigger multiple different tasks (functions in your Cloud Code) using different rules such as time or a manual job queue; that's why I suggest this pattern.
In a Locust load test environment, tasks are defined and called randomly. But if I want a task to be performed just after a specific task, how do I do it?
For example: after every 'X' URL call, I want the 'Y' URL to be called based on the response of 'X'.
In my experience, it's better to model Locust tasks as completely independent of each other, with each task covering a user scenario or behavior (e.g. a customer logs in, searches for a book, and adds it to the cart). This is mostly because that's a closer simulation of real users' behavior.
Have you tried just putting the multiple requests in the same task and branching with if/else based on your responses? This slide from Carl Byström's talk follows said approach.
You just have to make the GET or POST calls sequential. When you define your task, do something like this:

from locust import task

@task(10)
def my_task(l):
    # the second request always follows the first within one task execution
    l.client.get('/X')
    l.client.get('/Y')
There's an option to create a custom task set that inherits from the TaskSequence class. You then add the seq_task decorator to each of the task set's methods to run them sequentially.
https://docs.locust.io/en/latest/writing-a-locustfile.html#tasksequence-class
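A minimal sketch of that API (endpoints illustrative; note that newer Locust versions replace TaskSequence with SequentialTaskSet):

from locust import HttpLocust, TaskSequence, seq_task

class XThenY(TaskSequence):
    # runs first on every iteration
    @seq_task(1)
    def call_x(self):
        self.x_response = self.client.get('/X')

    # runs second; can branch on the stored response of /X
    @seq_task(2)
    def call_y(self):
        if self.x_response.status_code == 200:
            self.client.get('/Y')

class WebsiteUser(HttpLocust):
    task_set = XThenY
    min_wait = 1000
    max_wait = 2000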