Controlling requests per second and timeout threshold in Gatling - scala

I am working on a Gatling simulation. For the life of me, I cannot get my code to reach 10000 requests per second. I have read the documentation and keep trying different approaches, but my throughput seems capped at 5000 requests per second. I have attached the current iteration of my code; the URL and path information is blurred out. Assume that I have no issue with the HTTP part of my simulation.
package computerdatabase

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._
//import assertions._

class userSimulation extends Simulation {

  object Query {
    val feeder = csv("firstfileSHUF.txt").random
    val query = repeat(2000) {
      feed(feeder).
        exec(http("user")
          .get("/path/path/" + "${userID}" + "?fullData=true"))
    }
  }

  val baseUrl = "http:URL:7777"
  val httpConf = http
    .baseURL(baseUrl) // Here is the root for all relative URLs

  val scn = scenario("user") // A scenario is a chain of requests and pauses
    .exec(Query.query)

  setUp(scn.inject(rampUsers(1500) over (60 seconds)))
    .throttle(reachRps(10000) in (2 minutes),
      holdFor(3 minutes))
    .protocols(httpConf)
}
Additionally, I would like to set the maximum timeout threshold to 100ms. I have tried to do this with assertions and also by editing the configuration files, but it never seems to show up during the tests or in my reports. How can I set a request to KO if it takes longer than 100ms? Thank you for your help with this matter!

I ended up figuring this out. My code above is correct, and I now understand what Stephane, one of the main contributors to Gatling, was explaining. The server at the time simply could not handle my RPS threshold; it was an upper bound that was unreachable. After making changes to the server, it could handle that load. Additionally, I found a way to time out at 100ms in the configuration file: setting requestTimeout = 100 gives the timeout behavior I was looking for.
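For reference, here is roughly where that setting lives in gatling.conf. This is a sketch assuming the Gatling 2.x layout, where the key sits under the ahc (AsyncHttpClient) section; the nesting differs between versions, so check the gatling-defaults.conf bundled with your release. The value is in milliseconds:

gatling {
  http {
    ahc {
      requestTimeout = 100   # assumption: 2.x path; requests slower than 100ms are marked KO
    }
  }
}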

Related

Akka stream hangs when starting more than 15 external processes using ProcessBuilder

I'm building an app that has the following flow:
There is a source of items to process
Each item should be processed by an external command (it'll be ffmpeg in the end, but for this simple reproducible use case it is just cat, so the data is passed through unchanged)
In the end, the output of such external command is saved somewhere (again, for the sake of this example it just saves it to a local text file)
So I'm doing the following operations:
Prepare a source with items
Make an Akka graph that uses Broadcast to fan out the source items into individual flows
Each individual flow uses ProcessBuilder in conjunction with Flow.fromSinkAndSource to build a flow out of this external process execution
End the individual flows with a sink that saves the data to a file.
Complete code example:
import akka.actor.ActorSystem
import akka.stream.scaladsl.GraphDSL.Implicits._
import akka.stream.scaladsl._
import akka.stream.ClosedShape
import akka.util.ByteString
import java.io.{BufferedInputStream, BufferedOutputStream}
import java.nio.file.Paths
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, ExecutionContext, Future}

object MyApp extends App {
  // When this is changed to something above 15, the graph just stops
  val PROCESSES_COUNT = Integer.parseInt(args(0))
  println(s"Running with ${PROCESSES_COUNT} processes...")

  implicit val system = ActorSystem("MyApp")
  implicit val globalContext: ExecutionContext = ExecutionContext.global

  def executeCmdOnStream(cmd: String): Flow[ByteString, ByteString, _] = {
    val convertProcess = new ProcessBuilder(cmd).start
    val pipeIn = new BufferedOutputStream(convertProcess.getOutputStream)
    val pipeOut = new BufferedInputStream(convertProcess.getInputStream)
    Flow
      .fromSinkAndSource(StreamConverters.fromOutputStream(() => pipeIn), StreamConverters.fromInputStream(() => pipeOut))
  }

  val source = Source(1 to 100)
    .map(element => {
      println(s"--emit: ${element}")
      ByteString(element)
    })

  val sinksList = (1 to PROCESSES_COUNT).map(i => {
    Flow[ByteString]
      .via(executeCmdOnStream("cat"))
      .toMat(FileIO.toPath(Paths.get(s"process-$i.txt")))(Keep.right)
  })

  val graph = GraphDSL.create(sinksList) { implicit builder => sinks =>
    val broadcast = builder.add(Broadcast[ByteString](sinks.size))
    source ~> broadcast.in
    for (i <- broadcast.outlets.indices) {
      broadcast.out(i) ~> sinks(i)
    }
    ClosedShape
  }

  Await.result(Future.sequence(RunnableGraph.fromGraph(graph).run()), Duration.Inf)
}
Run this using the following command:
sbt "run PROCESSES_COUNT"
i.e.
sbt "run 15"
This all works quite well until I raise the amount of "external processes" (PROCESSES_COUNT in the code). When it's 15 or less, all goes well but when it's 16 or more then the following things happen:
The whole execution just hangs after emitting the first 16 items (16 being Akka's default buffer size, AFAIK)
I can see that cat processes are started in the system (all 16 of them)
When I manually kill one of these cat processes in the system, something frees up and processing continues (of course in the result, one file is empty because I killed its processing command)
I checked and this is definitely caused by the external process execution (not, for example, a limit of Akka's Broadcast itself).
I recorded a video showing these two situations (first, 15 items working fine and then 16 items hanging and freed up by killing one process) - link to the video
Both the code and video are in this repo
I'd appreciate any help or suggestions on where to look for a solution to this one.
It is an interesting problem, and it looks like the stream is deadlocking. Increasing the thread count may fix the symptom but not the underlying problem.
The problem is the following code:
Flow
  .fromSinkAndSource(
    StreamConverters.fromOutputStream(() => pipeIn),
    StreamConverters.fromInputStream(() => pipeOut)
  )
Both fromInputStream and fromOutputStream use the same default-blocking-io-dispatcher, as you correctly noticed. The reason a dedicated thread pool is used is that both make Java API calls that block the running thread.
Here is a part of a thread stack trace of fromInputStream that shows where blocking is happening.
at java.io.FileInputStream.readBytes(java.base@11.0.13/Native Method)
at java.io.FileInputStream.read(java.base@11.0.13/FileInputStream.java:279)
at java.io.BufferedInputStream.read1(java.base@11.0.13/BufferedInputStream.java:290)
at java.io.BufferedInputStream.read(java.base@11.0.13/BufferedInputStream.java:351)
- locked <merged>(a java.lang.ProcessImpl$ProcessPipeInputStream)
at java.io.BufferedInputStream.read1(java.base@11.0.13/BufferedInputStream.java:290)
at java.io.BufferedInputStream.read(java.base@11.0.13/BufferedInputStream.java:351)
- locked <merged>(a java.io.BufferedInputStream)
at java.io.FilterInputStream.read(java.base@11.0.13/FilterInputStream.java:107)
at akka.stream.impl.io.InputStreamSource$$anon$1.onPull(InputStreamSource.scala:63)
Now, you're running 16 simultaneous Sinks that are connected to a single Source. To support back-pressure, a Source will only produce an element when all Sinks send a pull command.
What happens next is that you have 16 simultaneous calls to FileInputStream.readBytes, and they immediately block all threads of the default-blocking-io-dispatcher. There are no threads left for fromOutputStream to write any data from the Source or to perform any other work. Thus, you have a deadlock.
The problem can be fixed if you increase the threads in the pool. But this just removes the symptom.
The correct solution is to run fromOutputStream and fromInputStream in two separate thread pools. Here is how you can do it.
Flow
  .fromSinkAndSource(
    StreamConverters.fromOutputStream(() => pipeIn).async("blocking-1"),
    StreamConverters.fromInputStream(() => pipeOut).async("blocking-2")
  )
with the following config:
blocking-1 {
  type = "Dispatcher"
  executor = "thread-pool-executor"
  throughput = 1
  thread-pool-executor {
    fixed-pool-size = 2
  }
}

blocking-2 {
  type = "Dispatcher"
  executor = "thread-pool-executor"
  throughput = 1
  thread-pool-executor {
    fixed-pool-size = 2
  }
}
Because they don't share the pools anymore, both fromOutputStream and fromInputStream can perform their tasks independently.
Also note that I just assigned 2 threads per pool to show that it's not about the thread count but about the pool separation.
I hope this helps to understand akka streams better.
Turns out this was a limit at the Akka configuration level for the blocking IO dispatcher.
So changing that value to something bigger than the number of streams fixed the issue:
akka.actor.default-blocking-io-dispatcher.thread-pool-executor.fixed-pool-size = 50
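In case it helps, here is a minimal sketch of wiring that setting in programmatically instead of through application.conf (ConfigFactory.parseString and withFallback are standard Typesafe Config calls; the system name is just the one from the question):

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Overlay the larger blocking IO pool on top of the regular configuration
val config = ConfigFactory
  .parseString("akka.actor.default-blocking-io-dispatcher.thread-pool-executor.fixed-pool-size = 50")
  .withFallback(ConfigFactory.load())

implicit val system = ActorSystem("MyApp", config)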

Gatling load testing and running scenarios

I am looking to create three scenarios:
The first scenario will run a bunch of GET requests for 30s
The second and third scenarios will run in parallel and wait until the first is finished.
I want the requests from the first scenario to be excluded from the report.
I have the basic outline of what I want to achieve but am not seeing the expected results:
val myFeeder = csv("somefile.csv")

val scenario1 = scenario("Get stuff")
  .feed(myFeeder)
  .during(30 seconds) {
    exec(
      http("getStuff(${csv_colName})").get("/someEndpoint/${csv_colName}")
    )
  }

val scenario2 = ...
val scenario3 = ...

setUp(
  scenario1.inject(
    constantUsersPerSec(20) during (30 seconds)
  ).protocols(firstProtocaol),
  scenario2.inject(
    nothingFor(30 seconds), //wait 30s
    ...
  ).protocols(secondProt),
  scenario3.inject(
    nothingFor(30 seconds), //wait 30s
    ...
  ).protocols(thirdProt)
)
I am seeing the first scenario run throughout the entire test. It doesn't stop after the 30s.
For the first scenario I would like to cycle through the CSV file and perform a request for each line, at perhaps 5-10 requests per second. How do I achieve that?
I would also like it to stop after the 30s and then run the other two in parallel, hence the nothingFor in the last two scenarios above.
Also, how do I exclude the first scenario from the report? Is that possible?
You are likely not getting the expected results due to the combination of settings between your injection profile and your "Get Stuff" scenario.
constantUsersPerSec(20) during (30 seconds)
will start 20 users on the "Get Stuff" scenario every second for 30 seconds. So even during the 30th second, 20 users will START "Get Stuff". The injection profile only controls when a user starts, not how long they are active for. So when a user executes the "Get Stuff" scenario, they make the 'get' request repeatedly over the course of 30 seconds due to the .during loop.
So at the very least, you will have users executing "Get Stuff" for 60 seconds - well into the execution of your other scenarios. Depending on the execution time of your getStuff call, it may be even longer.
To avoid this, you could work out exactly how long you want the "Get Stuff" scenario to run, set that in the injection profile, and have no looping in the scenario (see the sketch below). Alternatively, you could just set your 'nothingFor' values to be >60s.
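As a rough sketch of the first option (it reuses the feeder, protocol and scenario names from your own snippet; the 10 users/sec rate, the 40s offset and atOnceUsers(50) are purely illustrative):

val getStuff = scenario("Get stuff")
  .feed(myFeeder)
  .exec(http("getStuff(${csv_colName})").get("/someEndpoint/${csv_colName}"))

setUp(
  // ~10 single-request users per second for 30s, which also gives you the
  // 5-10 requests/sec you asked about; injection stops after the 30s mark
  getStuff.inject(constantUsersPerSec(10) during (30 seconds)).protocols(firstProtocaol),
  // give the other scenarios a margin that covers the last requests' response time
  scenario2.inject(nothingFor(40 seconds), atOnceUsers(50)).protocols(secondProt)
)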
To exclude the Get Stuff calls from reports, you can add silencing to the protocol definition (assuming it's not shared with your other requests). More details at https://gatling.io/docs/3.2/http/http_protocol/#silencing
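For example, something along these lines (Gatling 3.2 syntax as per the linked docs; the base URL is a placeholder, and silentUri takes a regex matched against the request URL). Silenced requests still execute, they are just left out of the stats and report:

val firstProtocaol = http
  .baseUrl("http://myhost:8080")   // placeholder
  .silentUri(".*someEndpoint.*")   // silence the "Get stuff" requests

Alternatively, an individual request can be marked silent:

http("getStuff(${csv_colName})").get("/someEndpoint/${csv_colName}").silent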

Offer to queue with some initial delay

I want to offer a string sent in the load request to a queue after some initial delay, say 10 seconds.
If subsequent requests arrive with a short delay between them (1 second), everything works fine, but if they arrive continuously, e.g. from a script, then there is no delay.
Here is the sample code.
def load(randomStr: String) = Action { implicit request =>
  Source.single(randomStr)
    .delay(10 seconds, DelayOverflowStrategy.backpressure)
    .map(x => {
      println(x)
      queue.offer(x)
    })
    .runWith(Sink.ignore)
  Ok("")
}
I am not entirely sure that this is the correct way of doing what you want. There are some things you need to reconsider:
A delayed source has an initial buffer capacity of 16 elements. You can increase this with addAttributes(initialBuffer) (see the sketch after this list).
In your case the buffer can never actually become full, because each call provides just one element.
Who is the caller of the Action? You are defining a DelayOverflowStrategy.backpressure strategy, but is the caller able to handle back-pressure?
On every call of the action you are creating a stream consisting of one element, so how is the backpressure helping here? It is applied to the stream processing, not to the offering to the queue.
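A minimal sketch of that first point, reusing randomStr, queue and the surrounding Action (and its materializer) from the question; Attributes.inputBuffer sets the initial and maximum buffer sizes of the stage it is applied to, and the values here are only illustrative:

import akka.stream.{Attributes, DelayOverflowStrategy}
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.duration._

Source.single(randomStr)
  .delay(10.seconds, DelayOverflowStrategy.backpressure)
  .addAttributes(Attributes.inputBuffer(initial = 1, max = 1)) // configure the delay stage's internal buffer
  .map { x =>
    println(x)
    queue.offer(x)
  }
  .runWith(Sink.ignore)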

Performance issue in play application

I have a Play (2.3.0) application that does some database lookups. When there are more than 6 users, the application runs into performance problems.
I have narrowed down the problem to a controller with an action that does a sleep of 4 seconds.
A test client calls this action every 500 ms. I can see that the first 6 requests are processed, then it stops for a few seconds (until the 4-second sleep has passed) and handles the next 6.
Also: when I open 7 browser windows, the 7th will not load (it waits for a connection).
Looking at the documentation, it looks like my problem is blocking IO, and using the highly synchronous profile should solve it.
Therefore I added this profile to my application.conf, but nothing changes.
My application.conf looks like this:
application.context=/appname/

# Secret key
# ~~~~~
# The secret key is used to secure cryptographics functions.
# If you deploy your application to several instances be sure to use the same key!
application.secret="xxxxx"

play {
  akka {
    akka.loggers = ["akka.event.slf4j.Slf4jLogger"]
    loglevel = WARNING
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-min = 300
          parallelism-max = 300
        }
      }
    }
  }
}
and the action:
def performancetestSleep() = Action { request =>
  Thread.sleep(4000)
  Ok("hmmm good sleep")
}
It seems to me the threadpool configuration is ignored. What am I missing here?
What you need for this is really just one thread which handles the 4 second delay - a scheduler. Spawning that many threads defeats the whole point of the architecture that Play has, IMHO. You could then use the scheduler to create a Future[Result] which you'd feed into an Action.async block.
Now, you don't really need to implement your own scheduler since Play depends on Akka for its concurrency; and Akka has a scheduler which will do the job.
import scala.concurrent.Promise
import scala.concurrent.duration._
import play.libs.Akka

val system = Akka.system()

def delayedResponse = Action.async {
  import system.dispatcher
  val promise = Promise[Result]
  system.scheduler.scheduleOnce(4000 milliseconds) {
    promise.success(Ok("Sorry for the wait!"))
  }
  promise.future
}
I used
activator run
to start the server; that does not seem to pick up the thread pool profile. Using
activator start
does, and now the profile seems to be used. I still need to test whether this solves my problem. I will also have a look at the async call.

Simpy 3.0.4, setting resource priority

I am having trouble with resource priority in SimPy. Consider the following code:
import simpy

env = simpy.Environment()
res = simpy.PriorityResource(env, capacity = 1)

def go(id):
    with res.request(priority = id) as req:
        yield req
        print id, res

env.process(go(3))
env.process(go(2))
env.process(go(4))
env.process(go(5))
env.process(go(1))
env.run()
Lower number means higher priority, so I should get 1, 2, 3, 4, 5. But instead I am getting 3, 1, 2, 4, 5. So the first output is wrong; after that it's sorted!
Thanks in advance for your help.
This is correct. When "3" requests the resource, it is empty, so 3 gets the slot. The remaining processes have to queue and will get the resource in the order 1, 2, 4, 5.
If you use PreemptiveResource instead (like request(priority=id, preempt=True)), 3 will still get the resource first but will be preempted by 2. 2 will then get preempted by 1. 2 and 3 would then have to request the resource again to gain access to it.
I had the same problem, where I needed to make a factory FIFO. At the time I assigned a reaction time to each part and made it follow the previous part: only once the previous part had entered service at the resource did I make the next part issue its request. That solved the problem, but it seemed to slow down the simulation a little and required giving each part a reaction time; it was basically a revamp of the factory process. I would love to see a feature where the part doesn't have to request again.
Can it be done in the present version?