Set request timeout at connection pool level using hackney and HTTPoison

I am using HTTPoison, which uses hackney under the hood to make HTTP requests.
By default, hackney uses a default connection pool which is created with a connection timeout of 8,000 ms and a request timeout of 5,000 ms. These numbers are too small for our project.
I have created a connection pool with a different connection timeout:
:hackney_pool.child_spec(:connection_pool, [timeout: 15_000, max_connections: 50])
and I am setting the request timeout on each request like this:
HTTPoison.get("http://httpbin.org/get", [], hackney: [recv_timeout: 15_000])
This does not look efficient to me, as I have to set this timeout on every request.
I want to set recv_timeout at the pool level instead, e.g.:
:hackney_pool.child_spec(:connection_pool, [timeout: 15_000, recv_timeout: 15_000, max_connections: 50])
so that when the pool creates a new connection, the timeout is set at the connection level. My question is: can I specify the request timeout at the connection level, or is it strictly a request-level timeout that cannot be set on the connection?
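As far as I can tell from the hackney docs, recv_timeout is a per-request socket option, not a pool option; the pool only takes options such as timeout (how long an idle connection is kept in the pool) and max_connections. One way to avoid repeating the timeout is a thin wrapper with default options. A minimal sketch, assuming a recent HTTPoison that supports the process_request_options/1 callback (the module name MyApp.HTTP is made up, and the :connection_pool pool from above is assumed to be started elsewhere):

defmodule MyApp.HTTP do
  use HTTPoison.Base

  # Merge defaults into every request; options passed at the call
  # site still win over these defaults.
  def process_request_options(options) do
    defaults = [recv_timeout: 15_000, hackney: [pool: :connection_pool]]
    Keyword.merge(defaults, options)
  end
end

# Usage: MyApp.HTTP.get("http://httpbin.org/get")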

Related

Haproxy agent-check DRAIN(agent) status still accepts new connections

I am trying to load-balance my Node.js application, which uses WebSockets. I need haproxy to stop routing new connections to a server that has reached its maximum number of connections, while keeping the existing connections intact.
I do this by performing an agent-check on each of my servers. If a server cannot accept new connections, it responds to the agent-check with "drain"; if it can accept a new connection, it responds with "ready".
Here is my haproxy.cfg configuration file:
global
    daemon
    maxconn 240000
    log /dev/log local0 debug
    log /dev/log local1 notice
    tune.ssl.default-dh-param 2048

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option dontlog-normal
    option http-server-close
    option redispatch
    timeout connect 20000ms
    timeout http-request 1m
    timeout client 2100000ms
    timeout server 2100000ms
    timeout queue 30s
    timeout check 5s
    timeout http-keep-alive 180s
    timeout tunnel 3600s
    timeout tarpit 60s

frontend stats
    bind *:8084
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if TRUE

frontend test
    mode http
    bind *:5000
    default_backend ws

backend ws
    mode http
    fullconn 100000
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server 1 backend1:9999 check agent-check agent-port 8080 cookie 1 inter 500 fall 1 rise 2
And here is how I respond to the haproxy agent-check in my Node.js app:
const net = require('net');

// currentConn and MAX_CONN are maintained elsewhere in the app.
const healthCheckServer = net.createServer((c) => {
  let data = '';
  if (currentConn < MAX_CONN) {
    data += 'ready';   // server can take new connections
  } else {
    data += 'drain';   // tell haproxy to stop sending new connections
  }
  c.write(data + '\r\n');
  c.destroy();
});

healthCheckServer.listen(8080, '0.0.0.0');
When the number of connections to my application reaches its maximum, haproxy correctly changes the server status to DRAIN (agent) (I can observe this in the haproxy web dashboard). The problem is that new connections are still accepted by the application.
I am new to haproxy, so can someone point out where I am wrong?
I found out that when a server is drained by the agent (status set to DRAIN (agent)) and it is the only server in the backend, it still accepts new connections.
When multiple servers are present and each one is drained, the behavior is as expected: haproxy returns HTTP 503.
UPDATE:
It turned out that I was looking in the wrong direction all along.
First, I had to mark the backend that processes WebSocket connections as non-HTTP (remove the mode http line). My guess is that haproxy incorrectly counts current sessions for an HTTP backend when WebSockets are in use. Removing mode http solved my problem.
Second, returning maxconn:<conn> in the agent-check response looks like a much simpler and more idiomatic way to limit the number of concurrent connections; see the illustration just below.
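For illustration (the limit of 100 is made up), the agent can simply reply with the line

maxconn:100

instead of ready/drain, and haproxy will then cap that server at 100 concurrent connections on its own, with no drain logic needed in the application.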
Sources:
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-agent-check
https://www.haproxy.com/blog/websockets-load-balancing-with-haproxy/
I hope this will help someone.

What is the difference between idle-timeout and request timeout in akka http configuration?

I went to the documentation and found these:
# The time after which an idle connection will be automatically closed.
# Set to infinite to completely disable idle connection timeouts.
idle-timeout = 10 s
# Defines the default time period within which the application has to
# produce an HttpResponse for any given HttpRequest it received.
# The timeout begins to run when the *end* of the request has been
# received, so even potentially long uploads can have a short timeout.
# Set to `infinite` to completely disable request timeout checking.
#
# If this setting is not `infinite` the HTTP server layer attaches a
# `Timeout-Access` header to the request, which enables programmatic
# customization of the timeout period and timeout response for each
# request individually.
request-timeout = 20 s
I have a scenario where my server takes more than 10 seconds to process a request, but before the HttpResponse can be sent, the TCP connection between the client and the server is closed because of the idle timeout.
Although the connection is idle at that moment, the request is still being processed.
I thought this was the responsibility of the request timeout?
Can anyone explain, in this context, the difference between idle-timeout and request-timeout?
The documentation is a bit confusing, so I ran experiments based on it:
idle-timeout: the maximum time a connection can sit idle. It behaves the same as the request timeout. Example:
idle-timeout = 1 s
The application sends a request to a 3rd-party API and the connection is established, but the 3rd party never responds. You will then get a timeout exception:
akka.stream.scaladsl.TcpIdleTimeoutException
connecting-timeout (500 ms here): the maximum time within which an HTTP connection must be established.
From the documentation, with more details here:
# The idle timeout for an open connection after which it will be closed
# Set to null or "infinite" to disable the timeout, but notice that this
# is not encouraged since timeouts are important mechanisms to protect your
# servers from malicious attacks or programming mistakes.
idleTimeout = 75 seconds
It looks like you are setting the idle timeout lower than the request timeout, so the idle timeout fires first. Your idle-timeout setting should be longer than the request-timeout, so that for each request it is the request timeout that decides when the connection is closed.
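To make the fix concrete, a sketch of the corresponding server settings in application.conf (config paths as in the akka-http docs; the values are illustrative):

akka.http.server {
  # request-timeout must be shorter than idle-timeout, so the request
  # timeout, not the idle timeout, decides when a slow response fails.
  request-timeout = 20 s
  idle-timeout = 60 s
}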

sendReceive's Timeouts versus `spray.can.*` Config Settings

The spray-can docs comment on the spray.can.server.* and spray.can.client.* config settings:
spray.can.server {
# The time after which an idle connection will be automatically closed.
# Set to `infinite` to completely disable idle connection timeouts.
idle-timeout = 60 s
# If a request hasn't been responded to after the time period set here
# a `spray.http.Timedout` message will be sent to the timeout handler.
# Set to `infinite` to completely disable request timeouts.
request-timeout = 20 s
}
spray.can.client {
# The time after which an idle connection will be automatically closed.
# Set to `infinite` to completely disable idle timeouts.
idle-timeout = 60 s
# The max time period that a client connection will be waiting for a response
# before triggering a request timeout. The timer for this logic is not started
# until the connection is actually in a state to receive the response, which
# may be quite some time after the request has been received from the
# application!
# There are two main reasons to delay the start of the request timeout timer:
# 1. On the host-level API with pipelining disabled:
# If the request cannot be sent immediately because all connections are
# currently busy with earlier requests it has to be queued until a
# connection becomes available.
# 2. With pipelining enabled:
# The request timeout timer starts only once the response for the
# preceding request on the connection has arrived.
# Set to `infinite` to completely disable request timeouts.
request-timeout = 20 s
}
However, I also see that there is an implicit Timeout that can be passed to sendReceive.
Does setting any of the above config values clash with or override the implicit Timeout? In short, I'm trying to understand the distinction between sendReceive's implicit timeout and the config values above.

Understanding HTTP 504 Result from Spray Web Service

Given:
client <-- HTTP --> spray web service <-- HTTP --> other web service
The client receives the following HTTP status code and entity body when hitting the spray web service:
504 Gateway Timeout - Empty
Per this answer, my understanding is that the spray web service is not receiving a timely response from the other web service, and thus sending an HTTP 504 to the client.
Assuming that's reasonable, per https://github.com/spray/spray/blob/master/spray-http/src/main/scala/spray/http/StatusCode.scala#L158, I'm guessing that one of these server config values is responsible for the 504 in the spray web service's HTTP response to the client.
What config or implicit would cause the spray web service to reply with a 504 to the client?
I think you are using the default spray timeouts, and you will probably need to increase them. If so, there are two values you have to configure:
idle-timeout: the time an idle connection to your server is kept open before it is disconnected (default 60 s).
request-timeout: the time a request from your client (from your server to the other service) can wait before it times out (default 20 s).
The first value must always be higher than the second; otherwise the idle-timeout would close connections before the request timeout fires, making the request timeout pointless.
So just override the configuration in your application.conf like this:
spray.can {
server {
idle-timeout = 120 s
request-timeout = 20 s
}
}
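One caveat worth adding (my assumption, not stated in the answer above): the 504 is produced while the spray service is itself acting as a client to the other web service, so the client-side settings may be the ones to raise, e.g.:

spray.can {
  client {
    # per-request wait on the outgoing call to the other web service
    request-timeout = 40 s
    idle-timeout = 60 s
  }
}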

Connection reset by tomcat server on continuous reception of HTTP GET request

I am doing a load test of a web server. Currently I am using Tomcat 6 to test my code. While the test is running, the server resets the connection after a few minutes of receiving continuous GET requests for the same page. If I send GET requests with some gap between them (say 500 ms), it works fine. If I send GET requests 10 ms or less apart, the server resets the connection a few seconds after the start of the test. Please help me fix this problem. What is the reason for the reset? Is the server overloaded, or do I have to perform some operation while establishing the connection?
My GET request format is:
GET /index.html HTTP/1.1
Host: 180.168.40.40
Connection: keep-alive
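One thing worth ruling out (an assumption on my part, not established in this thread): by default, Tomcat's HTTP connector closes a keep-alive connection after maxKeepAliveRequests requests on it (100 by default), which a load-testing client firing rapid GETs may observe as a reset. An illustrative server.xml connector that lifts the limit:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           keepAliveTimeout="60000"
           maxKeepAliveRequests="-1" />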