Guzzle: difference between 'connect_timeout' and 'timeout'

What's the difference between the 'connect_timeout' and 'timeout' request options in Guzzle?

The most basic way I can explain this is (from what I understand):
connect_timeout - the time Guzzle will wait to establish a connection to the server.
timeout - the time Guzzle will wait, once a connection has been made, for the server to handle the request, e.g. while waiting on a long-running script.
This answer about curl's timeouts is also quite good: https://unix.stackexchange.com/questions/94604/does-curl-have-a-timeout/94612
The flags used there to define the timeouts, --connect-timeout and --max-time, make the difference much clearer.
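For example, this curl call (the URL is just a placeholder) gives the connection 5 seconds to be established but allows the whole transfer up to 120 seconds:
curl --connect-timeout 5 --max-time 120 https://example.com/slow-endpoint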
I also believe the Guzzle options map to these curl options:
timeout - https://curl.haxx.se/libcurl/c/CURLOPT_TIMEOUT.html
connect_timeout - https://curl.haxx.se/libcurl/c/CURLOPT_CONNECTTIMEOUT.html
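As a minimal sketch of setting both options per request (the endpoint URL is a placeholder):
use GuzzleHttp\Client;

$client = new Client();
// Fail after 5 s if the TCP connection cannot be established,
// and after 120 s if the whole request has not completed.
$response = $client->request('GET', 'https://example.com/slow-endpoint', [
    'connect_timeout' => 5,
    'timeout'         => 120,
]);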

Related

HAProxy: Random Layer4 timeouts when using httpchk

I'm currently investigating an issue of random "Layer4" timeouts being reported by the health checks used in HAProxy. The backend server being checked is proven to be up and responding at the time of these errors, as other traffic to the server keeps flowing through.
This makes me suspect the issue may be caused by our configuration.
The server health check is currently configured as follows:
option httpchk GET /health HTTP/1.1\r\nHost:\ Haproxy\r\nConnection:\ close
http-check expect string OK
server server1 server1.internal.example.com check check-ssl port 443 verify none inter 3s fall 2 backup
Trying to understand the docs, I see the "http-check connect" and "linger" options mentioned. Would the "connect" directive make any actual difference to how the connection for the health check is set up, compared to our current configuration?
Any other feedback or observations on the above config are welcome.
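For reference, HAProxy 2.2+ can express the same check with the newer http-check directives; a rough, untested sketch (the backend name is made up):
backend be_app
    option httpchk
    http-check connect ssl port 443
    http-check send meth GET uri /health ver HTTP/1.1 hdr Host Haproxy hdr Connection close
    http-check expect string OK
    server server1 server1.internal.example.com check verify none inter 3s fall 2 backup
With this form, the connection parameters for the check come from the http-check connect line rather than from check-ssl/port on the server line.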

Looking for debugging advice on SSL errors from EKS using varnish

I know this is a somewhat specific question, but I'm having a problem I can't seem to track down. I have a single pod deployed to EKS; the pod contains a Python app and a Varnish reverse caching proxy. I'm serving chunked JSON (that is, streaming lines of JSON, a la http://jsonlines.org/), and it can be multiple GB of data.
The first time I make a request and it hits the Python server, everything works correctly. It takes (much) longer than the cached version, but the entire set of JSON lines is downloaded. However, now that it's cached in Varnish, if I use curl, I get:
curl: (56) GnuTLS recv error (-110): The TLS connection was non-properly terminated.
or
curl: (56) GnuTLS recv error (-9): A TLS packet with unexpected length was received.
The SSL is terminated at the ELB, and when I use curl from the proxy container itself (using curl http://localhost?....), there is no problem.
The hard part of this is that the problem is somewhat intermittent.
If there is any advice in terms of clever varnishlog usage, or anything of the same ilk on AWS, I'd be much obliged.
Thanks!
Because TLS is terminated on your ELB load balancers, the connection between the ELB and Varnish should be plain HTTP.
The error is probably not coming from Varnish, because Varnish currently doesn't handle TLS natively. I'm not sure varnishlog can give you better insight into what is actually happening.
Checklist
The only checklist I can give you is the following:
Make sure the certificate you're using is valid
Make sure you're connecting to your target group over HTTP, not HTTPS
If you enable the PROXY protocol on your ELB, make sure Varnish has a -a listener that listens for PROXY protocol requests, on top of regular HTTP requests.
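As a sketch, such a dual listener could look like this on the varnishd command line (the ports are examples):
varnishd -a :80 -a :8443,PROXY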
Debugging
Perform top-down debugging:
Increase the verbosity of your cURL calls and try to get more information about the error (see the example after this list)
Try accessing the logs of your ELB and get more details there
Get more information from your EKS logs
And finally, run varnishlog -g request -q "ReqUrl eq '/your-url'" to get the full Varnish log transactions for a specific URL
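For the first step, a more verbose cURL invocation could look like this (the hostname and path are placeholders):
curl -v --raw https://your-elb.example.com/your-url -o /dev/null
-v prints the TLS handshake and response headers, and --raw disables curl's internal content and transfer decoding, so the chunked framing stays visible.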

Postgres's tcp_keepalives_idle Not Updating AWS ELB Idle Timeout

I have an Amazon ELB in front of Postgres. This is for Kubernetes-related reasons (see this question). I'm trying to work around the maximum AWS ELB idle timeout of 1 hour so that clients can execute long-running transactions without being disconnected by the ELB. I have no control over the client configuration in my case, so any workaround needs to happen on the server side.
I've come across the tcp_keepalives_idle setting in Postgres, which in theory should get around this by sending periodic keepalive packets to the client, thus creating activity so the ELB doesn't think the client is idle.
I tried testing this by setting the idle timeout on the ELB to 2 minutes. I set tcp_keepalives_idle to 30 seconds, which should force the server to send the client a keepalive every 30 seconds. I then execute the following query through the load balancer: psql -h elb_dns_name.com -U my_user -c "select pg_sleep(140)". After 2 minutes, the ELB disconnects the client. Why are the keepalives not coming through to the client? Is there something with pg_sleep that might be blocking them? If so, is there a better way to simulate a long running query/transaction?
I fear this might be a deep dive and I may need to bring out tcpdump or similar tools. Unfortunately, things get a bit harder to parse with all the k8s chatter going on as well, so before going down this route I thought I'd check whether I was missing something obvious. If not, any tips on how best to determine whether a keepalive is actually being sent by the server, through the ELB, and ending up at the client would be much appreciated.
Update: I reached out to Amazon regarding this. Apparently "idle" is defined as not transferring data over the wire, and data is defined as any network packet that has a payload. Since TCP keepalives have no payload, both the client's and the server's keepalives are considered idle traffic. So unless there's a way to get the server to send actual data inside its keepalives, or to send data in some other form, this may be impossible.
Keepalives are sent at the TCP level, well below PostgreSQL, so it makes no difference whether the server is running pg_sleep or something else.
Since a hosted database is somewhat of a black box, you could try to control the behavior on the client side. The fortunate thing is that PostgreSQL also offers keepalive parameters on the client side.
Experiment with
psql 'host=elb_dns_name.com user=my_user keepalives_idle=1800' -c 'select pg_sleep(140)'
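If you want to confirm on the wire whether keepalives actually flow end to end, tcpdump (which the question already mentions) will show them as zero-length ACK packets. Run something like this on the client while the pg_sleep query is in flight (the interface and port are assumptions):
sudo tcpdump -nn -i any 'tcp port 5432'
A keepalive shows up as a packet with "length 0" roughly every keepalives_idle (or tcp_keepalives_idle) seconds.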

What are HAProxy's http-request and http-response for?

I'm not quite clear on the http-request and http-response options in the HAProxy configuration.
Many of their parameters seem to be used for modifying the HTTP request and response, but I found that this can also be done using the regular option keywords.
What's the difference between
http-request set-src hdr(x-forwarded-for) #and
option forwardfor
Also what's the difference between:
connect timeout 5000
client timeout 5000
server timeout 5000 #And
http-request timeout 5000
I'm new to HAProxy, and the documentation is written from a configuration-parameter perspective (like an API reference) rather than a use-case perspective (like a user guide).
So if I've asked an absurd question, please don't mind and answer kindly. Thanks.
What's the difference?
These first two are sort of opposites.
Configure HAProxy to use the contents of the X-Forward-For header to establish its internal concept of the source address of the request, instead of the actual IP address initiating the inbound connection:
http-request set-src hdr(x-forwarded-for)
Take the IP address of the inbound connection and add an X-Forwarded-For header for the benefit of downstream servers:
option forwardfor
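A sketch of where each directive would typically sit (the frontend/backend names are made up):
frontend fe_main
    bind :80
    # trust the client-supplied header; only safe when the connection
    # comes from a proxy you control
    http-request set-src hdr(x-forwarded-for)

backend be_app
    # append the connection's source address for downstream servers
    option forwardfor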
Also what's the difference?
Let's take this one backwards.
First, this isn't valid in any version that I'm aware of:
http-request timeout 5000
I believe you mean this...
timeout http-request 5000
...which sets the timeout for the client to send complete, valid HTTP headers and an extra \r\n signifying end of headers. This timer doesn't usually apply to the body, if there is one -- only the request headers. If this timer fires, the transaction is aborted, a 408 Request Timeout is returned, and the client connection is forcibly closed. This timer stops once complete request headers have been received.
By default, this timeout only applies to the header part of the request,
and not to any data. As soon as the empty line is received, this timeout is
not used anymore.
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4-timeout%20http-request
Note: http-request is something entirely different, and is used for manipulating the request during the request processing phase of the transaction lifecycle, before the request is sent to a back-end server.
Continuing, these aren't actually valid, either.
connect timeout 5000
client timeout 5000
server timeout 5000
It seems you've reversed the keywords. I believe you're thinking of these:
timeout connect 5000
That's the maximum time to wait for the back-end to accept our TCP connection by completing its share of the 3-way handshake. It has no correlation with timeout http-request, which is only timing the client sending the initial request. If this timer fires, the proxy will abort the transaction and return 503 Service Unavailable.
timeout client 5000
This one does overlap with timeout http-request, but not completely. If this timer is shorter than timeout http-request, the latter can never fire. This timer applies any time the proxy is expecting data from the client. The transaction aborts if this timer fires; I believe if this happens, the proxy just closes the connection.
timeout server 5000
This is time spent waiting for the server to send data. It also has no overlap with timeout http-request, since that window has already closed before this timer starts running. If we are waiting for the server to send data and it is idle for this long, the transaction is aborted, a 504 Gateway Timeout error is returned by HAProxy, and the client connection is closed.
So, as you can see, the overlap here is actually pretty minimal between these three and timeout http-request.
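Putting the corrected keywords together, a defaults section using all four timers might look like this (the values are only examples):
defaults
    mode http
    timeout http-request 10s   # client must finish sending request headers
    timeout connect 5s         # back-end must complete the TCP handshake
    timeout client 30s         # max inactivity while expecting data from the client
    timeout server 30s         # max inactivity while waiting on the server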
You didn't really ask, but you'll find significant overlap between things like http-response set-header and rsp[i]rep, and http-request set-header and req[i]rep. The [req|rsp][i]rep keywords represent older functionality that is maintained for compatibility but largely obsoleted by the newer capabilities that have been introduced, and again, there's not as much overlap as there likely appears at first glance because the newer capabilities can do substantially more than the old.
I'm new to HAProxy, and the documentation is written from a configuration-parameter perspective (like an API reference) rather than a use-case perspective (like a user guide).
That seems like a fair point.

perl LWP: connection timeout different from request timeout

I'm using LWP::UserAgent to communicate with webservices on several servers; the servers are contacted one at a time. Each response might take up to 30 minutes to finish, so I set the LWP timeout to 30 minutes.
Unfortunately the same timeout also applies if the server is not reachable at all (e.g. the webserver is down), so my application waits 30 minutes for a server that is not running.
Is it feasible to set two separate timeouts?
a short one, which waits for the connection to be established.
a longer one, which waits for the response, once the connection has been established.
The same timeout doesn't "also apply" if the server is not reachable. The timeout option works in a very specific way:
The request is aborted if no activity on the connection to the server is
observed for timeout seconds. This means that the time it takes for the
complete transaction and the request() method to actually return might be
longer.
As long as data is being passed, the timeout won't be triggered. You can use callback functions (see the REQUEST METHODS section of the docs) to check how long data transfer has been going on, and to exit entirely if desired.
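A sketch of that callback approach (the URL and the numbers are only illustrative): keep timeout long enough for a slow-but-streaming server, and let the callback abort once an overall deadline has passed.
use strict;
use warnings;
use LWP::UserAgent;

# `timeout` is an inactivity timeout, so it also bounds the connect
# phase; an unreachable host fails after ~60 s instead of 30 minutes.
my $ua = LWP::UserAgent->new(timeout => 60);

my $deadline = time() + 30 * 60;   # 30-minute overall budget
my $body     = '';

my $response = $ua->get(
    'http://service.example.com/report',   # hypothetical URL
    ':content_cb' => sub {
        my ($chunk) = @_;
        die "overall deadline exceeded\n" if time() > $deadline;
        $body .= $chunk;
    },
);
Note this only helps while the server streams data; a server that stays completely silent for more than 60 seconds will also be aborted by the inactivity timeout.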