How to rate limit by HTTP status code with HAProxy?

HAProxy provides a built-in http_err_rate counter which “reports the average HTTP request error rate over that period.” This can be used in a stick table to rate-limit clients that are generating a lot of errors. That might look something like this:
frontend web
    tcp-request content reject if { src_get_gpc0(Abuse) gt 0 }
    acl scanner src_http_err_rate(Abuse) ge 10
    http-request deny if scanner flag_abuser

backend Abuse
    stick-table type ip size 1m expire 60m store gpc0,http_err_rate(20s)
What I'd like to do is track something like the http_err_rate, but only for 401 Unauthorized status codes. That way HAProxy would only be concerned with rate-limiting unauthorized requests, rather than all HTTP error codes.
Thanks!

What I'd like to do is track something like the http_err_rate, but only for 401 Unauthorized status codes.
You can use the General Purpose Counters together with an ACL matching on the status fetch. The following example configuration will track the rate of 404 errors for a given IP address [1] and deny requests with the 429 status if a rate of 10 requests per 10 seconds is exceeded:
frontend fe_http
    mode http
    bind *:8080
    stick-table type ipv6 size 10k expire 300s store gpc0_rate(10s)
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc0_gpc0_rate gt 10 }
    # Relevant line below
    http-response sc-inc-gpc0(0) if { status 404 }
    default_backend be_http

backend be_http
    mode http
    server example example.com:80
[1] Note: I recommend using ipv6 as the stick-table type; it can hold both IPv4 and IPv6 addresses.
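Applied to the 401 Unauthorized case asked about above, the only change is the status matched in the ACL. A minimal, untested sketch, reusing the same fe_http/be_http names:

frontend fe_http
    mode http
    bind *:8080
    stick-table type ipv6 size 10k expire 300s store gpc0_rate(10s)
    http-request track-sc0 src
    # Deny with 429 once a client exceeds 10 counted 401s per 10 seconds
    http-request deny deny_status 429 if { sc0_gpc0_rate gt 10 }
    # Count only 401 Unauthorized responses
    http-response sc-inc-gpc0(0) if { status 401 }
    default_backend be_http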

If you want to rate limit clients depending on their rate of 401 responses, you need to change the 429 code to 401 in your config:
http-request deny deny_status 401 if { sc_http_req_cnt(0) gt 10 }
With both deny and tarpit you can add the deny_status flag to set a custom response code instead of the default 403/500 that they use out of the box. For example, using http-request deny deny_status 429 will cause HAProxy to respond to the client with the error 429: Too Many Requests.
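As a small illustration of that flag (a sketch, not taken from the question's config; too_many_requests is a hypothetical ACL you would define yourself):

    # Respond 429 instead of the default 403 used by deny
    http-request deny deny_status 429 if too_many_requests
    # Respond 429 instead of the default 500 used by tarpit
    http-request tarpit deny_status 429 if too_many_requests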
For more "general" information about ACLs and rate limiting, see:
https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/
https://www.haproxy.com/blog/introduction-to-haproxy-acls/

Related

Setup rate-limits based on hostname

I'm going through https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/ but I'm unable to grok how to write a configuration which rate-limits based on the Host header. Does the following look alright?
frontend website
    bind :80
    stick-table type string size 100k expire 30s store http_req_rate(10s)
    # What do I put here?
    http-request track-sc0 request.header(Host)
    # what does sc_http_req_rate(0) really mean?
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 20 }
    default_backend servers
Also, what is an easy way to validate whether a rate-limiting configuration works as intended? (not simply test the syntactic validity of the config)
I figured it out:
    stick-table type string size 100k expire 300s store http_req_rate(60s)
    http-request track-sc0 hdr(Host)
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 30 }
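For reference, sc_http_req_rate(0) returns the http_req_rate stored in the stick-table entry currently tracked by track-sc0, i.e. the request rate for this particular Host value. Placed back into the frontend from the question, the working lines would look roughly like this (a sketch, untested):

frontend website
    bind :80
    stick-table type string size 100k expire 300s store http_req_rate(60s)
    # Track requests per Host header value
    http-request track-sc0 hdr(Host)
    # Deny with 429 once a Host exceeds 30 requests per 60s window
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 30 }
    default_backend servers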

Implement a rate-limit relating to the healthy servers count using haproxy

I want to implement a rate-limit system using the stick table of HAProxy. Considering that I have 100 servers and a limit of 10 requests per server, the rule would be:
    http-request track-sc0 int(1) table GlobalRequestsTracker
    http-request deny deny_status 429 if { sc0_http_req_rate(GlobalRequestsTracker),div(100) gt 10 }
Now, if I want to make this dynamic depending on the healthy server count, I need to replace the hardcoded 100 with the nbsrv fetch:
    http-request track-sc0 int(1) table GlobalRequestsTracker
    http-request deny deny_status 429 if { sc0_http_req_rate(GlobalRequestsTracker),div(nbsrv(MyBackend)) gt 10 }
But I'm getting the error:
error detected while parsing an 'http-request deny' condition : invalid args in converter 'div' : expects an integer or a variable name in ACL expression 'sc0_http_req_rate(GlobalRequestsTracker),div(nbsrv(MyBackend))'.
Is there a way to use nbsrv as a variable inside the div operator?
HAProxy does not allow nested function calls as far as I know. But you could store the number of backend servers in a variable and use it in the division (see http-request set-var in the HAProxy documentation). I have not tested or used this personally, but I guess it could look like:
frontend <fe>
    http-request track-sc0 int(1) table <tbl>
    http-request set-var(req.<var>) nbsrv(<be>)
    http-request deny deny_status <code> if { sc0_http_req_rate(<tbl>),div(req.<var>) gt <val> }
See the HAProxy documentation.
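Filled in with the table and backend names from the question (equally untested; fe_main and req.nbsrv are just illustrative names, and the GlobalRequestsTracker backend with its stick-table is assumed to exist as in the question):

frontend fe_main
    http-request track-sc0 int(1) table GlobalRequestsTracker
    # Store the healthy server count of MyBackend in a variable...
    http-request set-var(req.nbsrv) nbsrv(MyBackend)
    # ...then divide the tracked request rate by it
    http-request deny deny_status 429 if { sc0_http_req_rate(GlobalRequestsTracker),div(req.nbsrv) gt 10 }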

HAProxy close front-end connection after N HTTP requests

I'm attempting to configure HAProxy to close a client TCP connection after it has been used to process N requests. My goal is to have our long-lived clients occasionally re-establish connections that are otherwise kept-alive by HTTP Keep-Alive.
Basically I'm trying to implement the equivalent of nginx's keepalive_requests (http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests).
I currently have something like:
frontend https-in
    bind *:443 ssl crt /etc/ssl/private/cert.pem
    stick-table type binary len 32 size 1000 expire 75s store gpc0
    acl close_connection sc0_get_gpc0 gt 3
    acl exceeded_connection sc0_get_gpc0 gt 4
    http-response set-header Connection Keep-Alive unless close_connection
    http-response set-header Keep-Alive timeout=75\ max=3 unless close_connection
    http-response set-header Connection Close if close_connection
    timeout http-keep-alive 75s
    timeout client 75s
    tcp-request content track-sc0 ssl_fc_session_id
    tcp-request content reject if exceeded_connection
    http-request sc-inc-gpc0
    default_backend https

backend https
    option httpchk GET /health
    server localhost 127.0.0.1:8080 maxconn 1000
But some problems with this include:
The SSL session ID may be re-used across connections
This ends up abruptly closing a connection from a client once it has exceeded the threshold (assuming it ignores the Connection: Close)
Are there any recommended approaches for something like this? Ideally I would like to:
Track the counter per individual TCP connection (not per src, to avoid the case where the same IP has multiple connections established)
Close the connection on the final response (at the same time I send Connection: Close)
But I haven't been able to track down ways to do either of those.
Thanks!
Edit
I was able to devise a better way to track unique TCP connections by hashing a tuple of src, src_port, dst, and dst_port:
    http-request set-header X-Unique-Id %[src]:%[src_port]:%[dst]:%[dst_port]
    http-request set-header X-Unique-Id-SHA %[req.fhdr(X-Unique-Id),sha1]
    http-request track-sc0 req.fhdr(X-Unique-Id-SHA)
I'm not crazy about having to create the dummy headers, but this seems to work.
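For completeness, a sketch of how those lines might slot into the frontend above in place of the ssl_fc_session_id tracking (untested; the Keep-Alive headers and timeouts from the original are left out for brevity):

frontend https-in
    bind *:443 ssl crt /etc/ssl/private/cert.pem
    stick-table type binary len 32 size 1000 expire 75s store gpc0
    # Per-connection key: hash of the TCP 4-tuple instead of ssl_fc_session_id
    http-request set-header X-Unique-Id %[src]:%[src_port]:%[dst]:%[dst_port]
    http-request set-header X-Unique-Id-SHA %[req.fhdr(X-Unique-Id),sha1]
    http-request track-sc0 req.fhdr(X-Unique-Id-SHA)
    http-request sc-inc-gpc0(0)
    acl close_connection sc0_get_gpc0 gt 3
    http-response set-header Connection Close if close_connection
    default_backend https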

How to fix an improper request in HAProxy

We have several (100+) clients in the field with a bug in their HTTP request. The request previously worked when routed directly to our Windows server, but it now fails with HAProxy v1.7 in front of it.
Here is an example request:
GET /index.aspx HTTP/1.1 \nHost: host\n\n
There is an extra space after the HTTP version before the \n.
Here is a snapshot of the relevant config.
frontend http_port_80
    bind :80
    mode http
    reqrep (.)\ HTTP/1.1\ (.*) \1\ HTTP/1.1\2
    option forwardfor
    option accept-invalid-http-request
    stats enable
    use_backend cert_update if is_updater
    use_backend getConsoleHTTP if is_getconsole
    default_backend schedule_server
I have tried rewriting the request to remove the extra space and setting option accept-invalid-http-request to address the issue, but we still receive the same error.
{
type: haproxy,
timestamp: 1506545591,
termination_state: PR-,
http_status:400,
http_request:,
http_version:,
remote_addr:192.168.1.1,
bytes_read:187,
upstream_addr:-,
backend_name:http_port_80,
retries:0,
bytes_uploaded:92,
upstream_response_time:-1,
upstream_connect_time:-1,
session_duration:2382,
termination_state:PR
}
Does anyone have any ideas of how to fix the malformed request prior to haproxy rejecting it?

HAProxy 503 Service Unavailable. No server is available to handle this request

How does HAProxy deal with static files, like .css, .js, .jpeg? When I use my config file, my browser says:
503 Service Unavailable
No server is available to handle this request.
This is my config:
global
    daemon
    group root
    maxconn 4000
    pidfile /var/run/haproxy.pid
    user root

defaults
    log global
    option redispatch
    maxconn 65535
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000
    retries 3
    log 127.0.0.1 local3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout check 10s

listen dashboard_cluster :8888
    mode http
    stats refresh 5s
    balance roundrobin
    option httpclose
    option tcplog
    #stats realm Haproxy \ statistic
    acl url_static path_beg -i /static
    acl url_static path_end -i .css .jpg .jpeg .gif .png .js
    use_backend static_server if url_static

backend static_server
    mode http
    balance roundrobin
    option httpclose
    option tcplog
    stats realm Haproxy \ statistic
    server controller1 10.0.3.139:80 cookie controller1 check inter 2000 rise 2 fall 5
    server controller2 10.0.3.113:80 cookie controller2 check inter 2000 rise 2 fall 5
Is my config file wrong? What should I do to solve this problem? Thanks!
What I think is the cause:
There was no default_backend defined. The 503 will be sent by HAProxy itself; this will appear as NOSRV in the logs.
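A minimal sketch of that fix, assuming requests that don't match url_static should also go to static_server (swap in whichever backend should actually be the fallback):

listen dashboard_cluster :8888
    mode http
    # ... acl and stats lines as above ...
    use_backend static_server if url_static
    # Without a default, requests matching no use_backend rule get a 503 (NOSRV)
    default_backend static_server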
Another Possible Cause
Based on one of my experiences, the HTTP 503 error I received was due to two bindings I had for the same IP and port, x.x.x.x:80.
frontend test_fe
    bind x.x.x.x:80
    bind x.x.x.x:443 ssl blah
    # more config here

frontend conflicting_fe
    bind x.x.x.x:80
    # more config here
The HAProxy configuration check does not warn you about it, and netstat doesn't show two LISTEN entries, which is why it took a while to realize what was going on.
This can also happen if you have two HAProxy services running. Check the running processes and terminate the older one.
Try making the timers bigger and check that the server is reachable.
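For example (values here are arbitrary, just larger than the ones in the config above; the point is only to rule timeouts out as the cause):

defaults
    # ... other settings as above ...
    timeout connect 30s
    timeout client  2m
    timeout server  2m
    timeout check   30s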
A 503 can happen for many reasons. From the HAProxy docs:
The status code is always 3-digit. The first digit indicates a general status :
- 1xx = informational message to be skipped (eg: 100, 101)
- 2xx = OK, content is following (eg: 200, 206)
- 3xx = OK, no content following (eg: 302, 304)
- 4xx = error caused by the client (eg: 401, 403, 404)
- 5xx = error caused by the server (eg: 500, 502, 503)
503 when no server was available to handle the request, or in response to
monitoring requests which match the "monitor fail" condition
When a server's maxconn is reached, connections are left pending in a queue
which may be server-specific or global to the backend. In order not to wait
indefinitely, a timeout is applied to requests pending in the queue. If the
timeout is reached, it is considered that the request will almost never be
served, so it is dropped and a 503 error is returned to the client.
If you see SC in the logs:
SC The server or an equipment between it and haproxy explicitly refused
the TCP connection (the proxy received a TCP RST or an ICMP message
in return). Under some circumstances, it can also be the network
stack telling the proxy that the server is unreachable (eg: no route,
or no ARP response on local network). When this happens in HTTP mode,
the status code is likely a 502 or 503 here.
Check ACLs, check timeouts... and check the logs; that's the most important part.