Request-response life cycle in Play(Scala) 2.4.X - scala

A few days back, I faced an issue where the client was receiving responses from a Play application only after 20 seconds. I have New Relic set up on the production server, which reports RPM, average response time, CPU and memory usage, etc. According to New Relic the response time never exceeded 500 milliseconds, but I verified that the client was receiving responses after 20 seconds. To dig deeper, I added a Filter that logs the time required to serve each request in the Play application:
import java.text.SimpleDateFormat
import java.util.Calendar
import play.api.Logger
import play.api.http.HeaderNames._
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.mvc._

val noCache = Filter { (next, rh) =>
  val startTime = System.currentTimeMillis
  next(rh).map { result =>
    val requestTime = System.currentTimeMillis - startTime
    Logger.warn(s"${rh.method} ${rh.uri} took ${requestTime}ms and returned ${result.header.status}")
    result.withHeaders(
      PRAGMA -> "no-cache",
      CACHE_CONTROL -> "no-cache, no-store, must-revalidate, max-age=0",
      EXPIRES -> serverTime
    )
  }
}

private def serverTime = {
  val calendar = Calendar.getInstance()
  val dateFormat = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss z")
  dateFormat.setTimeZone(calendar.getTimeZone)
  dateFormat.format(calendar.getTime())
}
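The measurement the filter performs can be sketched outside Play with plain Futures (the `timed` helper and the 50 ms task below are hypothetical stand-ins for `next(rh)`):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Record the start time before the Future-returning call is forced,
// then attach the elapsed-time computation to its completion.
def timed[A](work: => Future[A]): Future[(A, Long)] = {
  val start = System.currentTimeMillis
  work.map(a => (a, System.currentTimeMillis - start))
}

val (result, elapsedMs) =
  Await.result(timed(Future { Thread.sleep(50); "ok" }), 5.seconds)
```

The key point, as in the filter, is that the elapsed time covers only the span from entering the filter to the result becoming available inside the app, not the time the response spends queued in the network stack afterwards.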
During my load test, I sent around 3K concurrent requests to the Play app and captured a TCPDUMP covering all of them. These are my observations:
As per the Play application log, the maximum time the app took to respond was 68 milliseconds.
As per the TCPDUMP, the maximum time required to respond to any request was around 10 seconds.
As per New Relic, the maximum response time was around 84 milliseconds (since this is very close to my own logs, we can ignore this one).
As far as I know, a Filter is one of the last stages in the request-response life cycle. So if the logs in the Filter say that a request needed 68 milliseconds, while the TCPDUMP claims the response was sent only after 10 seconds, what caused the delay in responding to the request?
I understand that in a multi-threaded environment a context switch can happen after any particular statement, but a context switch should not cause this much delay. As per New Relic, there were fewer than 50 threads during this load test.
Can someone explain what can cause this? You are welcome to provide deep insights into the request-response life cycle.

I was able to fix the above issue by increasing the file descriptor (FD) limit. FD exhaustion was causing the late responses.
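FD pressure of this kind can be watched from inside the JVM itself. A minimal sketch using the JDK's Unix MX bean (on non-Unix platforms the counters are reported as -1 here):

```scala
import java.lang.management.ManagementFactory
import com.sun.management.UnixOperatingSystemMXBean

// On a Unix JVM the OS MXBean exposes both current FD usage and the limit,
// which makes it possible to spot FD exhaustion before responses start lagging.
val (openFds, maxFds) = ManagementFactory.getOperatingSystemMXBean match {
  case unix: UnixOperatingSystemMXBean =>
    (unix.getOpenFileDescriptorCount, unix.getMaxFileDescriptorCount)
  case _ => (-1L, -1L) // metrics not available on this platform
}
println(s"open FDs: $openFds / limit: $maxFds")
```

Logging these two numbers periodically during a load test would have shown the process approaching its limit well before the 10-second delays appeared in the TCPDUMP.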

Related

Close RESTEasy client after a certain delay

I'm trying to close a RESTEasy client after a certain delay (e.g. 5 seconds), and the configuration I'm currently using does not seem to work at all.
import java.util.concurrent.TimeUnit;

import org.apache.http.client.HttpClient;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.HttpClientBuilder;
import org.jboss.resteasy.client.jaxrs.ResteasyClient;
import org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder;
import org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient43Engine;

HttpClient httpClient = HttpClientBuilder.create()
    .setConnectionTimeToLive(5, TimeUnit.SECONDS)
    .setDefaultRequestConfig(RequestConfig.custom()
        .setConnectionRequestTimeout(5 * 1000)
        .setConnectTimeout(5 * 1000)
        .setSocketTimeout(5 * 1000)
        .build())
    .build();
ApacheHttpClient43Engine engine = new ApacheHttpClient43Engine(httpClient, localContext);
ResteasyClient client = new ResteasyClientBuilder().httpEngine(engine).build();
According to the documentation, ConnectionTimeToLive should close the connection no matter whether there is a payload or not.
Please find the link attached:
https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/developing_web_services_applications/index#jax_rs_client
In my specific case there is sometimes some latency and the payload is sent in chunks (each arriving within the socketTimeout interval, hence the connection is kept alive, and the client can end up active for hours).
My main goal is to kill the client and release the connection, but I feel there is something I'm missing in the configuration.
I'm using WireMock to replicate this specific scenario by sending the payload in chunks:
.withChunkedDribbleDelay
Any clue about the configuration?
You may try using .withFixedDelay(60000) instead of .withChunkedDribbleDelay().
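The keep-alive effect described in the question — chunks arriving within the socket timeout so the timeout never fires — can be reproduced with plain JDK sockets. A sketch (the local dribbling server is a hypothetical stand-in for the real endpoint):

```scala
import java.net.{ServerSocket, Socket}

// A per-read socket timeout restarts after every successful read, so a stream
// that dribbles chunks inside the interval keeps the connection alive far
// longer than the timeout value, with no SocketTimeoutException thrown.
val server = new ServerSocket(0)
val writer = new Thread(new Runnable {
  def run(): Unit = {
    val s = server.accept()
    val out = s.getOutputStream
    for (_ <- 1 to 5) {        // five 1-byte chunks, 100 ms apart
      out.write('x'); out.flush()
      Thread.sleep(100)
    }
    s.close()
  }
})
writer.start()

val client = new Socket("localhost", server.getLocalPort)
client.setSoTimeout(300)        // read timeout, larger than the inter-chunk gap
val in = client.getInputStream
val start = System.currentTimeMillis
var total = 0
var b = in.read()
while (b != -1) { total += 1; b = in.read() }
val elapsedMs = System.currentTimeMillis - start
client.close(); server.close()
// all five chunks are read and the total elapsed time exceeds the 300 ms
// timeout, yet no read ever waited longer than 100 ms, so no timeout fired
```

This is why `setSocketTimeout` alone cannot cap the total lifetime of a chunked response; only an overall deadline (or forcibly closing the connection) can.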

Play Framework WS library stream method stops after 2 minutes on connection

I am using Play 2.6 and the Twitter streaming API.
Below is how I connect to Twitter using the WS library's stream() method.
The problem is that the stream always stops after exactly 2 minutes. I tried different topics and the behavior is pretty consistent.
It seems there is a setting for this, but I could not find where.
I am not sure whether it's on the Play side or the Twitter side.
Any help is greatly appreciated.
ws.url("https://stream.twitter.com/1.1/statuses/filter.json")
  .sign(OAuthCalculator(ConsumerKey(credentials._1, credentials._2), RequestToken(credentials._3, credentials._4)))
  .withQueryStringParameters("track" -> topic)
  .withMethod("POST")
  .stream()
  .map { response =>
    response.bodyAsSource.map(t => t.utf8String)
  }
Play WS has a default request timeout, which is exactly 2 minutes.
Here is a link to the docs:
https://www.playframework.com/documentation/2.6.x/ScalaWS#Configuring-Timeouts
So you can put a line like the following in your application.conf
play.ws.timeout.request = 10 minutes
to specify the default timeout for all your requests.
You can also specify the timeout for a single request using the withRequestTimeout method of the WSRequest builder:
/**
* Sets the maximum time you expect the request to take.
* Use Duration.Inf to set an infinite request timeout.
* Warning: a stream consumption will be interrupted when this time is reached unless Duration.Inf is set.
*/
def withRequestTimeout(timeout: Duration): WSRequest
So to disable the request timeout for a single request, you can use the following code:
ws.url(someurl)
  .withMethod("GET")
  .withRequestTimeout(Duration.Inf)
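The difference between a finite timeout and Duration.Inf can be seen with plain Futures, independent of Play WS (`slowWork` below is a hypothetical 200 ms stand-in for a long-lived streaming call):

```scala
import scala.concurrent.{Await, Future, TimeoutException}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// A stand-in for a streaming response that takes longer than the deadline.
def slowWork: Future[String] = Future { Thread.sleep(200); "done" }

// A finite timeout interrupts the wait with a TimeoutException...
val timedOut =
  try { Await.result(slowWork, 50.millis); false }
  catch { case _: TimeoutException => true }

// ...while Duration.Inf waits as long as the work takes.
val unbounded = Await.result(slowWork, Duration.Inf)
```

This mirrors the warning in the scaladoc above: a consumer bound by a finite request timeout is cut off when the deadline passes, whereas Duration.Inf lets the stream run until the server closes it.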

Constantly increasing response times with akka http client

I'm using the akka http client (version 10.0.0) to make requests to an endpoint served by a PHP Yii framework based application. The following code is executed every time a request is to be made:
val importConfirmMsg = new ImportConfirmMessage(msg.orderId, msg.shipmentDate)
val uri = Uri(config.getString("endpoint.url"))
  .withQuery(Query(
    "call_id" -> config.getString("endpoint.call_id"),
    "cmd"     -> config.getString("endpoint.cmd")
  ))
val request = HttpRequest(method = POST, uri = uri, entity = importConfirmMsg)
val result = http.singleRequest(request)
  .map(r => r.entity.dataBytes.runWith(Sink.ignore))
The first few requests receive responses in a matter of milliseconds but as the application continues to run and send more requests I'm seeing response times rising steadily to several seconds, then to tens of seconds and eventually timing out at the one minute mark.
Is my implementation incorrect?

How timeout works in Dispatch

In the API there is:
val http = Http.configure(_
  .setConnectionTimeoutInMs(1)
)
What is this config for? I use it together with:
.setMaxRequestRetry(0)
I thought I would get a failed Future after the timeout. I create the Future like this:
val f = http(u OK as.String)
f.map {
  NotificationClientConnectionParams.parseFromString
}
But instead of a failure I get a success long after my timeout.
How is it supposed to work?
My test looks like this:
val startTime = java.time.LocalTime.now()
val f = TcpUtil2.registerClientViaDispatch(ClientHeaders("12345", "123456789"))
f onSuccess {
  case c =>
    println(s"Success: $c")
    println(java.time.Duration.between(startTime, java.time.LocalTime.now()).toMillis)
}
f onFailure {
  case e =>
    println(s"failure: ${e.getMessage}")
}
Thread.sleep(2000)
The response time is in the hundreds of milliseconds and I got a success. Is this a bug in Dispatch?
An HTTP round trip goes through several phases (overly simplified):
1. establishing connection
2. connection established
3. sending request payload
4. request payload sent
5. waiting for response payload
6. receiving response payload
7. response payload received
From what I understand, you measure the time between states 1 and 7.
setConnectionTimeoutInMs comes from async-http-client which is used by Dispatch internally. Here's an excerpt from its documentation:
Set the maximum time in millisecond an AsyncHttpClient can wait when connecting to a remote host
Thus, this method sets the maximum time the client will wait between states 1 and 2.
There's also setRequestTimeoutInMs:
Set the maximum time in millisecond an AsyncHttpClient wait for a response
This method seems to set the time between states 5 and 6 (or 7, I'm not sure which).
So here's what's probably happening in your case. You connect to the remote host, and the server accepts the connection very quickly (the time between 1 and 2 is small), so your Future doesn't get failed. Then there are several options: either the server takes a long time to prepare the response before it starts sending it back to you (the time between 5 and 6), or the response is very big and takes a long time to deliver (the time between 6 and 7), or both. But since you don't set the request timeout, your Future is not failed because of any of this.
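The same connect-versus-read split exists in the JDK's own HttpURLConnection, which makes the two timeouts easy to compare. A sketch (the URL is arbitrary and never contacted, since openConnection() touches no network):

```scala
import java.net.{HttpURLConnection, URL}

// Two distinct deadlines: connect timeout bounds phases 1-2 (TCP handshake),
// read timeout bounds the wait for response data once connected (phase 5 on).
val conn = new URL("http://example.com/").openConnection()
  .asInstanceOf[HttpURLConnection]
conn.setConnectTimeout(1000) // max wait to establish the connection
conn.setReadTimeout(5000)    // max wait for response data per read
```

A 1 ms connect timeout like the one in the question would only ever fire during the handshake; once the server has accepted the connection, it places no bound on how long the response may take.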

How to measure INVITE to 200 OK ring duration in OpenSips?

OpenSIPS provides various timeouts for configuration: http://www.opensips.org/html/docs/modules/1.8.x/tm.html
How can I measure the time (ring duration) between receiving an INVITE and the 200 OK? Is there a special function?
I was able to solve this using the $Ts core variable.
i) Record the initial timestamp:
$dlg_val(inviteStartTimestamp) = $Ts;
ii) When the 200 OK is received in the reply route, compute the time difference in seconds:
$var(ringDurationSec) = $Ts - $(dlg_val(inviteStartTimestamp){s.int});
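The arithmetic behind those two script lines is ordinary epoch-second subtraction, since $Ts holds the current Unix timestamp in seconds. The same computation in Scala terms (the sleep is a hypothetical stand-in for the time between the INVITE and the 200 OK):

```scala
// Capture the epoch-second timestamp when the INVITE arrives...
val inviteStartTimestamp = System.currentTimeMillis / 1000
Thread.sleep(1200) // stand-in for the ring time
// ...and subtract it from the timestamp at the 200 OK.
val okTimestamp = System.currentTimeMillis / 1000
val ringDurationSec = okTimestamp - inviteStartTimestamp
```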