How to enable HTTP response buffering in HAProxy?

I hope I understand this right: HAProxy is able (in HTTP mode) to
buffer the output (response) from a backend server, so that the
server process is freed as soon as possible. A slow client
would therefore "only" consume resources on HAProxy.
How much of the response can HAProxy buffer?
Is there a config variable? Is it TCP/IP buffering or
buffering in HAProxy userspace?
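
For reference: HAProxy buffers in userspace. Each connection gets a request buffer and a response buffer whose size is controlled by the global tune.bufsize parameter (16384 bytes by default); the kernel's normal TCP socket buffers sit underneath that. A minimal sketch, with a purely illustrative value:

global
    # per-buffer size in bytes; each connection gets one request and
    # one response buffer of this size (default 16384)
    tune.bufsize 32768

Note that raising it increases per-connection memory usage and also affects the largest HTTP request line and headers HAProxy will process.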

Related

How to make internal redirect in HAProxy?

Our stateful service holds sessions in RAM. It takes about a minute to save a session or load it back, so we use Redis to track which node a session is currently loaded on and to block requests for that session on other nodes.
But HAProxy sometimes switches a session between nodes. When that happens, say the session was in RAM on the first node and we are switching to the second, the request has to wait: the first node needs to save its state and the second needs to restore it. While this is happening, HAProxy apparently decides the node is down, so it starts switching other requests as well, and the same process repeats for them. We increased HAProxy's waiting timeout, but it didn't help. How can we make HAProxy route this request, and all forthcoming requests of a specific session, to a specific node?
Something like:
303
Location: 192.168.1.2
Okay, so your application requires session stickiness.
Since you already have cookies in place for session handling, I suggest using cookie-based stickiness in HAProxy as well.
In short, here is a config snippet from this blog post: https://www.haproxy.com/blog/load-balancing-affinity-persistence-sticky-sessions-what-you-need-to-know/
backend bk_web
    balance roundrobin
    # insert a SERVERID cookie to pin each client to one server;
    # "indirect" strips the cookie before the request reaches the server,
    # "nocache" marks responses carrying a freshly inserted cookie as
    # non-cacheable so shared caches don't store them
    cookie SERVERID insert indirect nocache
    server s1 192.168.10.11:80 check cookie s1
    server s2 192.168.10.21:80 check cookie s2
@Baptiste has a rather detailed explanation of this setup in HAproxy 1.5.8 How do I configure Cookie based stickiness?
When you use more than one HAProxy server, you can sync the stickiness state between them via the peers protocol, which is described in this blog post:
https://www.haproxy.com/blog/introduction-to-haproxy-stick-tables/
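
A rough sketch of such a peers setup, with placeholder names and addresses, could look like this:

peers mypeers
    # every HAProxy instance lists itself and the other load balancers
    peer lb1 192.168.10.1:10000
    peer lb2 192.168.10.2:10000

backend bk_web
    balance roundrobin
    # share the stickiness table with the peers listed above
    stick-table type ip size 100k expire 30m peers mypeers
    stick on src
    server s1 192.168.10.11:80 check
    server s2 192.168.10.21:80 check

This example sticks on the client's source address; with the cookie approach above, syncing isn't even needed, since the cookie itself carries the server id.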

Does haproxy buffer tcp request body when backend is down?

I am using haproxy 1.6.4 as a TCP (not HTTP) proxy.
My clients make TCP requests. They do not wait for any response; they just send the data and close the connection.
How does haproxy behave when all back-end nodes are down?
I see that (from the client's point of view) haproxy is accepting incoming connections.
The haproxy statistics show that the front-end has status OPEN, i.e. it is accepting connections.
The number of sessions and bytes-in increases for the front-end, but not for the back-end (it is DOWN).
Is haproxy buffering the incoming TCP requests, and will it pass them to the back-end once the back-end is up?
If yes, is it possible to configure this buffer size? Where is the data buffered (memory, disk)?
Is it possible to turn off the front-end (stop accepting incoming TCP connections) when all back-end nodes are DOWN?
Edit:
When the back-end started, I saw that:
* the back-end's bytes-in and session counts equal the front-end's number of sessions
* but my one and only back-end node has fewer bytes-in, fewer sessions, and shows errors.
So it seems that in the default configuration there is no TCP buffering.
Data is accepted by haproxy even if all back-end nodes are down, but this data is lost.
I would prefer to turn off the TCP front-end when there are no back-end servers, so client connections would be rejected. Is that possible?
Edit:
The haproxy log is
Jul 15 10:02:32 172.17.0.2 haproxy[1]: 185.130.180.3:11319 [15/Jul/2016:10:02:32.335] tcp-in app/ -1/-1/0 0 SC \0/0/0/0/0 0/0 908
My log format is
%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tw/%Tc/%Tt\ %B\ %ts\ \%ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U
What I understand from the log:
there are no backend servers
The termination state SC translates to:
S : the TCP session was unexpectedly aborted by the server, or the
server explicitly refused it.
C : the proxy was waiting for the CONNECTION to establish on the
server. The server might at most have noticed a connection attempt.
I don't think what you are looking for is possible. HAProxy handles the two sides of the connection (front-end, back-end) separately. The incoming TCP connection is established first, and only then does HAProxy look for a matching destination for it.
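
That said, if rejecting clients while no server is usable would be acceptable, the nbsrv fetch (which counts the usable servers in a backend) may be worth testing in a tcp-request rule. A sketch, reusing the tcp-in/app names from the log above (the bind port is a placeholder, and I have not verified this on 1.6.4):

frontend tcp-in
    mode tcp
    bind :9000
    # refuse new client connections while backend "app" has no usable server
    tcp-request connection reject if { nbsrv(app) lt 1 }
    default_backend app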

Nginx proxy hangs in a while when idle

We are using the nginx proxy_pass feature for bridging RESTful calls to a backend app, and use the nginx WebSocket proxy for the same system at the same time. Sometimes (our guess: when the system has had no client requests for a while) nginx freezes on any request until we restart it, and then everything works well again. What is the problem? Do I have to change the keep-alive settings? I have turned off the buffering and caching features for the proxy in nginx.conf.
I found the problem. By checking the nginx error log, and with a bit of hackery sniffing and guessing, I found out that the WebSocket connections frequently disconnect and reconnect (mobile devices), nginx keeps trying to hold those connections open, and eventually the maximum connection limit is reached. I just decreased the timeouts and increased the max connections.
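
The directives involved were presumably along these lines; the values here are illustrative, not the poster's actual ones:

events {
    worker_connections 8192;    # raise the per-worker connection cap
}

http {
    keepalive_timeout  30s;     # close idle client keep-alives sooner
    proxy_read_timeout 60s;     # give up sooner on a silent upstream
    proxy_send_timeout 60s;
}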

Perl - creating multiple HTTP servers listening on different ports

An external application will send HTTP POST requests to multiple HTTP/HTTPS servers (e.g. 10 HTTP servers). These servers may get almost the same HTTP POST request. Each server will analyze the data and send a 200 OK response if data validation passes.
All of these HTTP servers listen on a single host, on different ports.
Please suggest a way to achieve this.
FYI: this request/response between the application and the HTTP server(s) will happen only once, after which the HTTP servers will be shut down.
I am thinking of implementing it by forking HTTP::Daemon 10 times, but I am looking for a lighter solution.
I am also thinking of capturing the data through a single interface rather than checking it on all 10 individual HTTP servers.
for PORT in `seq 11111 11121` ; do plackup -Ilib --listen :$PORT app.psgi & done
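
If it helps, the app.psgi that the one-liner loads can be as small as the following sketch; the validation step is a placeholder:

# app.psgi - minimal PSGI app; replace the validation placeholder
use strict;
use warnings;

my $app = sub {
    my $env = shift;
    # read the POSTed body from the PSGI input stream
    my $len  = $env->{CONTENT_LENGTH} || 0;
    my $body = '';
    $env->{'psgi.input'}->read($body, $len) if $len;
    # ... validate $body here; return a 400 on failure ...
    return [ 200, [ 'Content-Type' => 'text/plain' ], ['OK'] ];
};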

How to bypass socket?

I have installed the streaming server Lighttpd, which runs on port 81.
I have a C program that listens for HTTP requests on port 80 using a server socket created with the socket API.
I want that, as soon as I get a request from a client on port 80, it is forwarded to the streaming server, and the remaining conversation takes place between the streaming server and the client, bypassing my C program completely.
The problem is that the client expects replies from the socket on port 80 (i.e. from my C program's socket), since it sent its request to port 80, rather than from the streaming server, which serves on port 81.
Can anyone help me with bypassing the socket on port 80 when replying to the client?
The solution I can think of: my program acts as a middleman. It forwards the request to the streaming server on port 81 and, when it gets replies from there, forwards them to the client. But true bypassing would be more efficient, and I don't know how to do that. Please help me out.
Thanks in advance
Why put your C program in front? Lighttpd is designed to act as a frontend proxy (among other uses), so you can put lighttpd in front and use its mod_proxy_core to pass requests to your C program. You can use X-Rewrite and/or X-Sendfile to pass requests back to Lighttpd after doing some processing inside your application.
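
For illustration, with the mod_proxy module in the lighttpd 1.4.x series the forwarding part would look roughly like this (the URL prefix and backend port are placeholders):

# lighttpd.conf sketch: lighttpd listens on port 80 and forwards
# matching requests to the C program, now moved to 127.0.0.1:8080
server.port = 80
server.modules += ( "mod_proxy" )
proxy.server = ( "/app" => ( ( "host" => "127.0.0.1", "port" => 8080 ) ) )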
I have recently implemented a similar technique, where a single program accepts a TCP connection and then 'passes' that connection to another component and plays no further part in the socket conversation. It uses the technique of passing the file descriptor of the accepted socket over a UNIX socket to the server component, which effectively does an inter-process dup() of the fd.
See here and here.
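
The sending side of that trick looks roughly like this minimal sketch (error handling omitted; the connected UNIX-domain socket between the two processes is assumed to exist already):

/* Pass an accepted socket fd to another process over a connected
 * UNIX-domain socket, using an SCM_RIGHTS control message. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int unix_sock, int fd_to_pass)
{
    char dummy = 'x';                       /* must send at least 1 byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = { 0 };

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;           /* "this message carries an fd" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(unix_sock, &msg, 0);     /* receiver gets a dup'd fd */
}

The receiving side calls recvmsg() with a matching control buffer and reads the new descriptor out of CMSG_DATA().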
This works for me as I have control of both ends of the UNIX socket on the server-side, but to work for you, you'd need:
A UNIX socket between your dispatching component and server components.
Full control of the server component.
You might need to hack away at the lighttpd source code...
Sorry, not really a proper answer...