java.util.concurrent.TimeoutException: Request timed out in Gatling (Scala)

Hi, I'm running 200 concurrent users over 200 seconds. When I execute the same script, after 2-3 runs I get this error. Do I need to change some Gatling settings (for example, shareConnections in the conf file), or is it because the server cannot handle more requests?
class LoginandLogout extends Simulation {

  val scn = scenario("LoginandLogout")
    .exec(Login.open_login)
    .pause(Constants.SHORT_PAUSE)
    .exec(CommonSteps.cscc_logout_page)

  setUp(scn.inject(rampUsers(200) over (200 seconds))).protocols(CommonSteps.httpProtocol)
}
I'm using Gatling 2.0.0-RC5 and Scala 2.10.2.

Why blame the messenger? If you have a request timeout, that's your SUT's fault. Load testing is not about trying to tweak the injector to get the best possible figures, but to find out possible performance issues. You've just found one.
Using shareConnections makes sense when you're trying to simulate Web API clients (like a program calling a SOAP or RESTful web service). It doesn't if you're trying to simulate web browsers.
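For completeness, in the Gatling 2.x DSL shareConnections is set on the HTTP protocol builder rather than in gatling.conf. A sketch of such a protocol definition (the base URL is made up, and this fragment assumes the Gatling dependency and imports are on the classpath):

```scala
// Hypothetical fragment (Gatling 2.x DSL); example.com is a placeholder
val httpProtocol = http
  .baseURL("http://example.com")
  .shareConnections // share one connection pool across virtual users;
                    // appropriate for API clients, not browser simulations
```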
I'm using gatling 2.0.0-RC5 scala 2.10.2
You really should upgrade! Just check the release notes since then, if you're not convinced.

Related

Flutter: tests failing when run together but passing when run individually

I've been working on flutter unit testing on my API but I encountered a problem.
I've mocked an HTTP client to test all the POST, GET, DELETE, and PATCH requests. The problem is that when I run all my tests together, some tests on POST and PATCH requests fail (often after a test on a POST or PATCH request that passes) but when I run them individually, they pass.
The error I face is: Bad state: Cannot call when within a stub response
If anyone has encountered a similar problem and found a solution I would gladly take it :)
Thanks in advance for your time on my problem, I wish you all a good day!

Handling concurrent requests with Warp

I have made a Wai web application and it is being run using Warp. However I have one AJAX request that takes quite some time to finish, while that request is pending, no other requests will be accepted by the server. I thought Warp was capable of handling concurrent requests. Am I missing something? The way I run Warp is just by calling run port app where run is imported via import Network.Wai.Handler.Warp (run) and app is my Wai application.
I was trying Happstack Lite to see whether it would solve my problem, and there the -threaded flag was used when compiling the web application. Using that flag also solved my concurrent-requests problem in the Warp application. I was under the assumption that GHC would have threading support by default, but apparently this has to be specified explicitly during compilation.
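For reference, the fix above can be sketched in a .cabal file (the package and executable names here are invented):

```cabal
executable my-warp-app
  main-is:        Main.hs
  build-depends:  base, wai, warp
  -- -threaded links GHC's threaded runtime; without it a long-running
  -- handler blocks the whole server, which is the symptom described above
  ghc-options:    -threaded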

Running actor manually?

I'm working on my first Play Framework 2 application. I want to call a web service every once in a while and store data in the database, so I've started writing an actor that is scheduled to run every hour.
The problem is that I'm wasting a lot of time simply waiting for the job to be triggered (even though I've scheduled it to run every minute while I'm testing). I'd love to be able to start the import manually, simply to make sure it works.
I've tried using the Scala console, but it doesn't automatically reload my code every time I save, so I have to restart the console manually. I've considered wrapping the import process in a class and using unit testing and mocking, but I'm looking for a quicker way, especially because I'm new to Play and Scala.
Any idea or suggestion?
Thanks!
How about writing a custom sbt task?
A simple way to write an sbt task that loads your application classpath (so you can implement the behavior as a method call in your application code) is described at sbt-tasks.
I'm assuming you are using the Akka scheduler inside the actor to send a message to itself, which then invokes the web service. While testing, you can simply send the same message to the actor yourself (ActorRef ! Message).
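Another quick way to trigger the job by hand, in the spirit of the sbt-task answer: factor the import logic out of the actor into a plain object, then call it directly from a main class, a test, or the console. All names here (Importer, runImport, ManualRun) are hypothetical, and the web-service call is stubbed so the sketch is self-contained:

```scala
object Importer {
  // In the real application this would call the web service and store the
  // results in the database; stubbed here to return a fake record count.
  def runImport(): Int = 42
}

object ManualRun extends App {
  // The actor's receive block would just delegate:
  //   case DoImport => Importer.runImport()
  // so the same logic can also be invoked directly, bypassing the scheduler:
  println(s"imported ${Importer.runImport()} records")
}
```

Running ManualRun exercises exactly the code the scheduled actor would run, without waiting for the next tick.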

Play2.1 (Scala) leaking memory in Server Sent Event (Enumerator)

I'm working on an application where I want the server to send events to my JS front-end to show money transactions as they are processed.
The way I implemented works as intended; however, after a couple of hours I notice the heap increasing and consequently killing our server.
I read somewhere that it could be caused by nginx not being configured correctly, but the keep-alive is set to just a few seconds. I also saw a couple of posts regarding memory leak issues when one attempts something like the following:
val newTransaction: Enumerator[String] = {
  Enumerator.generateM[String] {
    Promise.timeout(getNewTransactions, 500) // checkingInterval: poll every 500 ms
  }
}
...
Ok.stream(newTransaction &> EventSource()).as("text/event-stream")
getNewTransactions is just a method that hits Redis to check if any new transaction arrived for an account.
I tried updating my project to use the newest Scala version 2.10.2, but it seems that it doesn't solve it. In the meantime, I am resorting to JavaScript polling until I can figure it out. Does anyone have any idea how I might solve this?

Organizing and analyzing logs in an asynchronous Scala web application

In the old days, when each request to a web application was handled by one thread, it was fairly easy to understand the logs. One could, for example, use a servlet filter to name the thread that was handling a request with some sort of request id. This request id then could be output in the logs. In this world, a simple grep was all it took to collect the log lines for a given request.
In my current position, I'm building web applications with Scala (we're using Scalatra but that isn't specifically relevant to my question). Each request creates a scala.concurrent.Future and is then parked until that future has completed. The important bit here is that the thread that actually handles the business logic is different from the thread that handled the request which is different (I think) from the thread that completes the request and so the context of that request is lost during processing. The business logic can log all it likes but it is hard to associate that logging with the specific request it relates to.
Now from the standpoint of supporting my web services in production, the old approach was great and I'd like to come up with something similar for my asynchronous services. I've been trying to come up with a way to do it but have come up empty. That is, I haven't come up with anything nearly as light weight as the old, name-the-thread model. Does the Stack Overflow crowd have any suggestions?
Thanks
As you have written, assign an id to each request and pass it to the business-logic functions. You can also do this with an implicit parameter, so your code won't be cluttered.
This should be possible with MDC logging, available in SLF4J, which uses thread-local storage to hold the context of each request.
You will also have to create an MDC-context-propagating execution context, to move the context across threads.
This post describes it well:
http://code.hootsuite.com/logging-contextual-info-in-an-asynchronous-scala-application/
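The thread-local hand-off described above can be sketched without SLF4J: a plain ThreadLocal stands in for MDC, and a wrapping ExecutionContext captures the request id when a task is submitted and restores it on whichever worker thread runs it. All names here (RequestId, PropagatingContext) are made up for illustration, not part of any library:

```scala
import scala.concurrent.ExecutionContext

// Stand-in for SLF4J's MDC: a per-thread request id.
object RequestId {
  private val current = new ThreadLocal[String]
  def set(id: String): Unit = current.set(id)
  def get: String = Option(current.get).getOrElse("-")
}

// An ExecutionContext that snapshots the caller's request id at submission
// time and restores it on the worker thread, so log lines emitted inside a
// Future can still be tagged with the originating request.
class PropagatingContext(underlying: ExecutionContext) extends ExecutionContext {
  def execute(task: Runnable): Unit = {
    val captured = RequestId.get // snapshot on the submitting thread
    underlying.execute(new Runnable {
      def run(): Unit = {
        RequestId.set(captured)  // restore on the worker thread
        try task.run()
        finally RequestId.set(null) // clear so pooled threads don't leak ids
      }
    })
  }
  def reportFailure(t: Throwable): Unit = underlying.reportFailure(t)
}
```

With an implicit `new PropagatingContext(ExecutionContext.global)` in scope, `RequestId.get` inside a `Future` body returns the id set on the request-handling thread; with SLF4J you would call `MDC.getCopyOfContextMap`/`MDC.setContextMap` in the same two places, as the linked post describes.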