What is the difference between idle-timeout and request timeout in akka http configuration? - scala

I went to the documentation and found these settings:
# The time after which an idle connection will be automatically closed.
# Set to infinite to completely disable idle connection timeouts.
idle-timeout = 10 s
# Defines the default time period within which the application has to
# produce an HttpResponse for any given HttpRequest it received.
# The timeout begins to run when the *end* of the request has been
# received, so even potentially long uploads can have a short timeout.
# Set to `infinite` to completely disable request timeout checking.
#
# If this setting is not `infinite` the HTTP server layer attaches a
# `Timeout-Access` header to the request, which enables programmatic
# customization of the timeout period and timeout response for each
# request individually.
request-timeout = 20 s
I have a scenario where my server takes more than 10 seconds to process a request, but before it can send the HttpResponse, the TCP connection between the client and the server is closed because of the idle timeout. Although the connection is idle at that moment, the request is still being processed.
I thought this was the responsibility of the request timeout?
Can anyone explain the difference between idle-timeout and request-timeout in this context?

The documentation is a bit confusing, so I ran some experiments based on it:
idle-timeout: the maximum time a connection may sit idle. From the caller's point of view it behaves much like the request timeout. Example:
idle-timeout = 1 s
The application sends a request to a 3rd-party API and the connection is established, but the 3rd party never responds. You will then get a timeout exception:
"akka.stream.scaladsl.TcpIdleTimeoutException"
connecting-timeout: 500 ms. It is the maximum time (here 500 ms) within which an HTTP connection must be established.
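To see the client-side idle timeout in action, here is a minimal sketch (the URL is illustrative, and it assumes akka.http.client.idle-timeout = 1 s in application.conf):

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.stream.scaladsl.TcpIdleTimeoutException
import scala.util.{Failure, Success}

object IdleTimeoutDemo extends App {
  implicit val system = ActorSystem("idle-timeout-demo")
  import system.dispatcher

  // If the peer stays silent longer than akka.http.client.idle-timeout,
  // the connection is torn down and the response future fails.
  Http().singleRequest(HttpRequest(uri = "http://example.com/slow")).onComplete {
    case Failure(_: TcpIdleTimeoutException) =>
      println("connection closed by the client idle-timeout")
    case Failure(other) =>
      println(s"request failed: $other")
    case Success(response) =>
      println(s"got ${response.status}")
  }
}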

From the documentation, and more details here:
# The idle timeout for an open connection, after which it will be closed.
# Set to null or "infinite" to disable the timeout, but notice that this
# is not encouraged, since timeouts are important mechanisms to protect your
# servers from malicious attacks or programming mistakes.
idleTimeout = 75 seconds
It looks like you are setting the idle timeout lower than the request timeout, so it takes precedence. Your idle timeout should be longer than the request timeout, so that for each request it is the request timeout that decides when the connection is closed.
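For example, a sketch of server settings where the request timeout always fires first (the values are illustrative):

akka.http.server {
  request-timeout = 20 s
  # keep the idle timeout comfortably above the request timeout
  idle-timeout = 60 s
}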

Related

Set request timeout at connection pool level using hackney and HTTPoison

I am using httpoison which uses Hackney under the hood to make HTTP requests.
By default, Hackney uses a default connection pool which is created with a connection timeout of 8,000 ms and a request timeout of 5,000 ms. These numbers are too small for our project.
I have created a connection pool with different connection timeouts.
:hackney_pool.child_spec(:connection_pool, [timeout: 15_000, max_connections: 50])
and I am setting the request timeout for each request like this:
HTTPoison.get("httpbin.org/get", [], hackney: [recv_timeout: 15_000])
This does not look efficient to me, as I need to put this timeout on every request. I want to set recv_timeout at the pool level instead, e.g.:
:hackney_pool.child_spec(:connection_pool, [timeout: 15_000, recv_timeout: 15_000, max_connections: 50])
so that when a new connection is created, the timeout is set at the connection level. My question is: can I specify the request timeout at the connection level, or is it strictly a request-level timeout that HTTP does not support at the connection level?

How to know whether the configuration is loaded

I am using akka http websocket client and would like to keep the client alive.
The section Automatic keep-alive Ping support says to put the following configuration into application.conf:
akka.http.client.websocket.periodic-keep-alive-mode = pong
I've configured it as follows:
akka.http {
  websocket {
    # periodic keep-alive may be implemented by sending Ping frames,
    # upon which the other side is expected to reply with a Pong frame,
    # or by sending a Pong frame, which serves as a unidirectional heartbeat.
    # Valid values:
    # ping - default, for bi-directional ping/pong keep-alive heartbeating
    # pong - for uni-directional pong keep-alive heartbeating
    #
    # See https://tools.ietf.org/html/rfc6455#section-5.5.2
    # and https://tools.ietf.org/html/rfc6455#section-5.5.3 for more information
    periodic-keep-alive-mode = ping

    # Interval for sending periodic keep-alives
    # The frame sent will be the one configured in akka.http.server.websocket.periodic-keep-alive-mode
    # `infinite` by default, or a duration that is the max idle interval after which a keep-alive frame should be sent
    periodic-keep-alive-max-idle = infinite
  }
}
How can I figure out whether the configuration was picked up or not?
I'm not sure I really understand your question, but you can show the complete configuration (including overrides etc.) when the actor system is loaded by setting akka.log-config-on-start to on.
Akka Docs on Logging the Configuration
Related Stack Overflow question
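Alternatively, you can verify a single setting programmatically. A minimal sketch using the Typesafe Config API that akka already depends on (the config path checked here is the client-side keep-alive mode):

import com.typesafe.config.ConfigFactory

object ShowConfig extends App {
  // Loads reference.conf defaults plus your application.conf overrides
  val config = ConfigFactory.load()

  // If application.conf was picked up, this prints your override (e.g. "pong");
  // otherwise it prints the default from reference.conf ("ping").
  println(config.getString("akka.http.client.websocket.periodic-keep-alive-mode"))
}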

sendReceive's Timeouts versus `spray.can.*` Config Settings

The spray-can docs comment on the spray.can.server.* and spray.can.client.* config settings:
spray.can.server {
  # The time after which an idle connection will be automatically closed.
  # Set to `infinite` to completely disable idle connection timeouts.
  idle-timeout = 60 s

  # If a request hasn't been responded to after the time period set here
  # a `spray.http.Timedout` message will be sent to the timeout handler.
  # Set to `infinite` to completely disable request timeouts.
  request-timeout = 20 s
}

spray.can.client {
  # The time after which an idle connection will be automatically closed.
  # Set to `infinite` to completely disable idle timeouts.
  idle-timeout = 60 s

  # The max time period that a client connection will be waiting for a response
  # before triggering a request timeout. The timer for this logic is not started
  # until the connection is actually in a state to receive the response, which
  # may be quite some time after the request has been received from the
  # application!
  # There are two main reasons to delay the start of the request timeout timer:
  # 1. On the host-level API with pipelining disabled:
  #    If the request cannot be sent immediately because all connections are
  #    currently busy with earlier requests it has to be queued until a
  #    connection becomes available.
  # 2. With pipelining enabled:
  #    The request timeout timer starts only once the response for the
  #    preceding request on the connection has arrived.
  # Set to `infinite` to completely disable request timeouts.
  request-timeout = 20 s
}
However, I also see that there's an implicit Timeout that can be passed to sendReceive.
Does setting any of the above config values clash with or override the implicit Timeout? In short, I'm trying to understand the distinction between sendReceive's implicit timeout and the above config values.
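For reference, a minimal sketch of where that implicit Timeout enters the picture (the 30-second value is illustrative; as I understand it, this futureTimeout only bounds how long the future returned by the pipeline may stay unfulfilled, independently of the spray.can.* settings):

import scala.concurrent.Future
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.util.Timeout
import spray.client.pipelining._
import spray.http.{HttpRequest, HttpResponse}

object SendReceiveDemo extends App {
  implicit val system = ActorSystem("send-receive-demo")
  import system.dispatcher

  // sendReceive resolves an implicit akka.util.Timeout (its futureTimeout parameter)
  implicit val timeout: Timeout = Timeout(30.seconds)

  val pipeline: HttpRequest => Future[HttpResponse] = sendReceive
  val response: Future[HttpResponse] = pipeline(Get("http://example.com/"))
}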

Spray.can.server.request-timeout property has no effect

In my src/main/resources/application.conf I include:
spray.can.server {
  request-timeout = 1s
}
In order to test this, I put a Thread.sleep(10000) in the Future that services my request.
When I issue a request, the server waits 10 seconds and responds with no hint of a timeout being sent to the client.
I am not overriding the timeout handler.
Why are my clients (chrome and curl) not receiving a timeout?
The configuration looks correct, so the Spray request timeout should be working. One frequent reason for it not working is that your application.conf is not actually being used by the application.
The config may be ignored because it is in the wrong place, not on your classpath, or not included in the JAR that you package.
To troubleshoot, first check that the default Spray timeout is working. By default it is 20 s: make your code sleep for 30 s and see whether a timeout is triggered.
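A minimal sketch of such a sanity check (the route name and port are illustrative): the handler sleeps well past the default 20 s request-timeout, so the client should receive Spray's timeout response:

import scala.concurrent.Future
import akka.actor.ActorSystem
import spray.routing.SimpleRoutingApp

object TimeoutCheck extends App with SimpleRoutingApp {
  implicit val system = ActorSystem("timeout-check")
  import system.dispatcher

  startServer(interface = "localhost", port = 8080) {
    path("slow") {
      get {
        complete {
          Future {
            Thread.sleep(30000) // longer than the default 20 s request-timeout
            "finished"
          }
        }
      }
    }
  }
}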
Check what's in your final config values by printing it. Set this in your conf:
akka {
  # Log the complete configuration at INFO level when the actor system is started.
  # This is useful when you are uncertain of what configuration is used.
  log-config-on-start = on
}
Finally, keep in mind other timeouts like timeout-timeout = 2 s.
I think the request-timeout is for the HTTP client: if the response is not returned before that value, the client will get a timeout from Spray. See the Spray doc:
# If a request hasn't been responded to after the time period set here
# a `spray.http.Timedout` message will be sent to the timeout handler.
# Set to `infinite` to completely disable request timeouts.
For example, in my web browser I get the message below:
Ooops! The server was not able to produce a timely response to your request.
Please try again in a short while!
If the timeout is long enough, the browser will keep waiting until a response is returned.

socket programming for bad network

client:
socket(), connect(), and then:
for (i = 0; i < 1024; i++) {
    write(sockfd, buf, 1024);   /* 1024 bytes, 1024 times = 1 MB in total */
}
exit(0);
server:
socket(), bind(), listen()
while (1) {
    connfd = accept()
    while ((n = read(connfd, buf, sizeof(buf))) != 0) {
        if (n == -1) abort();   /* never happened */
        total_read += n;
    }
    close(connfd)
}
Now, the client runs on a Mac behind NAT and the server runs on my VPS (abroad).
Generally it works fine: the client sends all the data and exits, and the server receives all the data.
However, when the network suddenly breaks for a couple of minutes while the client is running (and then recovers), the client does not exit even after a very long time. If I kill it with Ctrl+C and run it again, the server no longer seems to read the data (while the new client is still running).
here is what netstat shows:
client:
tcp4 0 130312 192.168.1.254.58573 A.B.C.D.8888 ESTABLISHED
server:
tcp 0 0 A.B.C.D:8888 a.b.c.d:54566 ESTABLISHED 10970/a.out
tcp 102136 0 A.B.C.D:8888 a.b.c.d:60916 ESTABLISHED -
A.B.C.D is my VPS address
a.b.c.d is my public client address
My questions are:
1. Why does this happen?
2. The server works fine after restarting; how can I write the code so that it recovers without a restart?
In TCP, there's no way to tell that a connection has failed unless you try to send something on the connection. TCP doesn't perform active monitoring of the connection (actually, there are optional "keepalive" packets, but these are not normally sent until the connection has been idle for a couple of hours). When you send something, you'll eventually get an error if there's a timeout waiting for the other machine to return an acknowledgement. But if you're just reading data without sending, you can't tell that the connection has failed -- it just looks like the sender doesn't have anything to send.
You can resolve this by designing your application so that the client is required to send something every N seconds. Then set a timer in the server that detects that you haven't received anything for more than N seconds (you should add a little extra time to allow for transient delays).
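One way to implement that server-side timer is a plain socket read timeout. A minimal sketch in Scala over java.net sockets (the port and the 30 s window are illustrative):

import java.net.{ServerSocket, SocketTimeoutException}

object DeadlineServer extends App {
  val server = new ServerSocket(8888)
  while (true) {
    val conn = server.accept()
    conn.setSoTimeout(30000) // read() fails if the peer is silent for 30 s (N + slack)
    val in  = conn.getInputStream
    val buf = new Array[Byte](1024)
    var total = 0
    try {
      var n = in.read(buf)
      while (n > 0) { total += n; n = in.read(buf) }
      println(s"peer finished normally, received $total bytes")
    } catch {
      case _: SocketTimeoutException =>
        println(s"peer silent too long, dropping connection after $total bytes")
    } finally conn.close()
  }
}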
When the network is broken, what happens is that your client keeps sending data and at some point the socket send buffer fills up (from what you show, you are sending 1024 bytes, 1024 times, 1 MB in total; the default send buffer may be around 16 KB, far less than 1 MB). Then, when the client tries to write, it blocks.
BTW, now that I'm answering your question: I was unsure whether, after a number of retransmission timeouts, TCP eventually gives up and closes the socket, making the blocked call return with an error. In practice TCP does give up eventually, but only after many failed retransmissions (on Linux, typically on the order of 15 to 30 minutes), which explains the "long long time". So connect() fails quickly if there is a problem in the network, while write() and read() can stay blocked far too long to be useful.
On the server side, the server stays blocked in read() because it never receives the EOF.
Solution:
On the client side, use non-blocking sockets. If the network is broken, at some point write() will return the error EWOULDBLOCK; then you will know the send buffer is full for some reason. At that point you can close the connection and try to connect again; if the network is still broken, the connect will fail with an error.
On the server side, also use non-blocking sockets together with select() and a timeout. After a few timeouts you may decide there is a problem with the connection and close it.
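A sketch of that server-side pattern in Scala using java.nio, i.e. non-blocking channels plus a selector timeout (the port and the 30 s window are illustrative):

import java.net.InetSocketAddress
import java.nio.ByteBuffer
import java.nio.channels.{SelectionKey, Selector, ServerSocketChannel, SocketChannel}

object SelectServer extends App {
  val selector = Selector.open()
  val server   = ServerSocketChannel.open()
  server.bind(new InetSocketAddress(8888))
  server.configureBlocking(false)
  server.register(selector, SelectionKey.OP_ACCEPT)
  val buf = ByteBuffer.allocate(4096)

  while (true) {
    // Block for at most 30 s; zero ready keys means nothing happened in that window,
    // which is where a real server would start counting strikes against idle peers.
    if (selector.select(30000) == 0)
      println("no activity for 30 s")
    val keys = selector.selectedKeys().iterator()
    while (keys.hasNext) {
      val key = keys.next(); keys.remove()
      if (key.isAcceptable) {
        val client = server.accept()
        client.configureBlocking(false)
        client.register(selector, SelectionKey.OP_READ)
      } else if (key.isReadable) {
        val ch = key.channel().asInstanceOf[SocketChannel]
        buf.clear()
        if (ch.read(buf) < 0) ch.close() // EOF: the peer closed cleanly
      }
    }
  }
}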