How can I tune the SSL connection timeout in haproxy?

It appears the default timeout for the SSL handshake is around 2 or 3 seconds, and it is timed separately from the "timeout connect" setting.
Is there a way to tune this?
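For reference, these are the timeout knobs I know of around connection setup; whether the SSL handshake is bound by any of them seems to be version-dependent, and I haven't found a dedicated SSL-handshake timeout directive (values below are placeholders):

defaults
    timeout connect 10s   # TCP connect to a backend server
    timeout client 30s    # client-side inactivity
    timeout server 30s    # server-side inactivity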
Thanks

Related

Understanding keepalive between client and cockroachdb with haproxy

We are facing a problem where our client (call it A) is attempting to connect to a DB server (CockroachDB, call it B) load-balanced via HAProxy:
A <--> haproxy <--> B
Every now and then, client A receives a Broken Pipe error, and I'm not able to understand why.
The Cockroach server already uses the default value of 60 seconds:
COCKROACH_SQL_TCP_KEEP_ALIVE ## enabled, sends keepalive probes every 60 seconds
And our HAProxy config has the following settings:
defaults
    mode tcp
    # Timeout values should be configured for your specific use.
    # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    # TCP keep-alive on client side. Server already enables them.
    option clitcpka
So what is causing the TCP connection to drop when keepalive is enabled on every end?
Keepalive is what makes connections go away when one of the endpoints has died without closing the connection. Investigate in that direction.
The only time keepalive actually keeps a connection alive is when an ill-configured firewall drops idle connections.
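For comparison, here is what enabling the same client-side keepalive looks like in application code; a minimal Go sketch (the address is hypothetical, and KeepAlive sets the probe period to mirror the 60-second value above):

package main

import (
    "log"
    "net"
    "time"
)

func main() {
    d := net.Dialer{
        Timeout:   10 * time.Second, // analogous to "timeout connect"
        KeepAlive: 60 * time.Second, // TCP keepalive period, like clitcpka
    }
    // Hypothetical CockroachDB address behind HAProxy.
    conn, err := d.Dial("tcp", "haproxy.internal:26257")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
}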

ill effects of reducing TCP socket connection retries

I have a TCP client on an embedded Linux device that establishes a connection with the server while the device is in running mode.
We also have a program mode, in which all activity has to cease because the system parameters will be changed.
The way I designed it was to create a socket on boot, close the connection when entering program mode, and reopen it after coming out of program mode.
My problem is that 'connect' during boot-up blocks for more than 2 minutes, and the delay keeps increasing over time, making the system sluggish.
Someone told me that changing 'tcp_syn_retries' would reduce the hang time; I tried it and found that it reduces the blocking time to under '1 ms'.
Can anyone tell me about the possible implications of this change?
Also, can you suggest how to implement the connect in non-blocking mode? The one I tried didn't establish the connection.
Any comments / responses will be helpful.
Edit: since TCP uses a three-way handshake, this change reduces the number of SYN retransmissions sent to the server during the handshake. As a result, connecting to remote TCP servers over a slow or lossy link becomes less reliable.
This is the info I got from googling. How much is too much? Any suggestions welcome.
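An alternative to lowering tcp_syn_retries system-wide is to bound the connect in the application itself: in C that means an O_NONBLOCK socket plus select/poll with a timeout; if Go is an option, the runtime does the non-blocking connect for you under a deadline. A minimal Go sketch (address and timeout are placeholders):

package main

import (
    "log"
    "net"
    "time"
)

func main() {
    // Give up after 5 seconds instead of waiting out the kernel's SYN retries.
    conn, err := net.DialTimeout("tcp", "server.local:9000", 5*time.Second)
    if err != nil {
        log.Printf("connect failed: %v", err) // log and retry later; don't block boot
        return
    }
    defer conn.Close()
}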

HAProxy, "timeout tunnel" vs "timeout client/server"

When configuring HAProxy, what is the difference between setting "timeout tunnel" alone vs setting both "timeout client" and "timeout server", all to the same value?
"timeout client" and "timeout server" apply on the client side and the server side respectively, from HAProxy's point of view. They set the inactivity timeout on that part of the connection. Both apply in TCP and HTTP mode; that said, in HTTP mode "timeout server" also means "max time for the server to generate an answer".
"timeout tunnel" applies only in HTTP mode, when HAProxy is operating in tunnel mode or when a websocket has been established. More information about "timeout tunnel" for websockets (and a comparison with the timeouts mentioned above) is here:
https://www.haproxy.com/blog/websockets-load-balancing-with-haproxy/
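As a sketch of how they interact (placeholder values): the client/server inactivity timeouts govern the connection until a tunnel, such as an established websocket, takes over, at which point "timeout tunnel" supersedes them:

defaults
    mode http
    timeout connect 5s
    timeout client 30s   # inactivity before the tunnel is established
    timeout server 30s
    timeout tunnel 1h    # inactivity once the connection is in tunnel mode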

getsockopt: connection timed out

I rewrote my project from Python Tornado to Go (using the Iris framework). The basic functions tested OK, but under high concurrency the app always stalls for a while and then these errors come out:
(dial tcp 192.168.1.229:6543: getsockopt: connection timed out)
Port 6543 is the PostgreSQL port used with PgBouncer; the PgBouncer and PostgreSQL processes are running fine.
I also find that Memcached connections sometimes time out (the Memcached process is still working).
Does this happen because of too many connections, or because some connections are not closed in time?
How can I avoid this problem?
Check your PgBouncer config: try increasing the max_client_conn option, then experiment with the concurrency level and request count during the stress test. Another possible issue is that you are not returning connections to the pool.
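On the application side, it also helps to cap and reuse the Go connection pool so it stays below PgBouncer's max_client_conn; a minimal sketch, assuming the lib/pq driver and a placeholder DSN:

package main

import (
    "database/sql"
    "log"
    "time"

    _ "github.com/lib/pq" // assumed Postgres driver
)

func main() {
    db, err := sql.Open("postgres", "postgres://user:pass@192.168.1.229:6543/app?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    db.SetMaxOpenConns(50)                 // keep below PgBouncer's max_client_conn
    db.SetMaxIdleConns(10)                 // reuse connections instead of reopening under load
    db.SetConnMaxLifetime(5 * time.Minute) // recycle before an idle drop elsewhere kills them

    // Always release connections: rows.Close(), stmt.Close(), tx.Commit()/Rollback()
    // return them to the pool; leaking them exhausts max_client_conn.
}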

Where can I find the default TCP connection keep alive timeout values for Chrome?

According to HTTP Keep Alive connection timeouts, Chrome 13 has at least a 300-second connection timeout; that is, if a request to the same server is not made within that window, the connection is closed. It also says that Chrome sends a TCP keep-alive packet every 45 seconds until the 300-second timeout expires, to get past NAT devices and firewalls that drop idle connections earlier.
Are there any settings in Chrome where I can verify these values? Can anyone point me to documentation on what the values are in the latest version of Chrome?