I am using HAProxy (on Debian) to distribute websocket connections across 4 servers.
                      +-----> webSocket 1
                      |
                      +-----> webSocket 2
public websocket ip --+
                      +-----> webSocket 3
                      |
                      +-----> webSocket 4
Without interrupting the public websocket IP or the other webSocket servers, is it possible to instruct HAProxy to stop forwarding connections to a specific server? If so, how do I resume afterwards?
                      +-----> webSocket 1
                      |
                      +-----> webSocket 2
public websocket ip --+
                      +--x--> webSocket 3
                      |
                      +-----> webSocket 4
Or, since I am limiting connections to 50 per webSocket server, is it possible to ask HAProxy to start webSocket instances (it's a Node.js script) when required? And when they are no longer required, will it stop the service to free up memory?
You can make it stop sending new connections to one of the back-end servers by using the admin socket and sending the enable server / disable server commands to it. You can also send the "set server <backend>/<server> state drain" command to the same socket to allow the current websocket sessions to finish and only then completely disable that back-end server.
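For example, assuming the stats socket is at /var/run/haproxy.sock and the backend and server are named websockets and webSocket3 (all of these names are placeholders for your own), the commands, piped into the socket with a tool such as socat, would look like this:

    set server websockets/webSocket3 state drain
    disable server websockets/webSocket3
    enable server websockets/webSocket3

state drain lets established sessions finish while refusing new ones, disable server puts the server into maintenance, and enable server puts it back into rotation.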
Regarding the second question: HAProxy does not know how to do that. You could create a utility that reads the current state of the system from HAProxy and does this job for you.
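A rough sketch of such a utility in Python, assuming the stats socket path and backend name below (both placeholders): it reads HAProxy's "show stat" CSV output and reports the current session count per server, leaving the actual starting/stopping of Node.js instances as a stub.

    import csv
    import io
    import socket

    SOCKET_PATH = "/var/run/haproxy.sock"  # placeholder: your stats socket
    BACKEND = "websockets"                 # placeholder: your backend name

    def show_stat():
        """Send "show stat" to the stats socket and return the raw CSV."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(SOCKET_PATH)
            s.sendall(b"show stat\n")
            chunks = []
            while data := s.recv(4096):
                chunks.append(data)
        return b"".join(chunks).decode()

    def sessions_per_server():
        # The first CSV line begins with "# "; strip it so the csv
        # module sees clean column headers (pxname, svname, scur, ...).
        raw = show_stat().lstrip("# ")
        reader = csv.DictReader(io.StringIO(raw))
        return {
            row["svname"]: int(row["scur"])
            for row in reader
            if row["pxname"] == BACKEND
            and row["svname"] not in ("FRONTEND", "BACKEND")
        }

    if __name__ == "__main__":
        for name, current_sessions in sessions_per_server().items():
            print(name, current_sessions)
            # decide here whether to spawn or stop a Node.js instance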
Basically, I know that browsers attach a different port to each TCP connection by choosing a free ephemeral port, and that this is what makes each connection unique. However, I don't know what it looks like at the TCP level when two back-end services connect to each other. Is it similar to how browsers work?
For example, let's say I'm sending a request from some HTTP client to 'Service A', which runs on a 'thread-per-connection' server and listens on port 'X'. Within the chosen endpoint I also send an HTTP request to 'Service B', which listens on port 'Y' (a similar service or a database). How will a unique TCP connection be started between these two services? Does 'Service A' act similarly to how browsers handle this?
The outside HTTP client application is acting as a client to Service A. So that app will use an ephemeral port when making that 1st connection.
Service A then acts as a client to Service B. So Service A will use an ephemeral port when making that 2nd connection.
----------       -------------           -------------
| client | ----> | service A | --------> | service B |
----------       -------------           -------------
    ^              ^       ^               ^
    |              |       |               |
x.x.x.x:e1     y.y.y.y:X y.y.y.y:e2    z.z.z.z:Y
What you describe is common to all TCP connections, including HTTP. The party creating the connection (the "client") picks an ephemeral port (it is actually picked by the OS, not by the application) when connecting to the party accepting the connection (the "server").
Note that the terms "client" and "server" might be confusing, since they are used with several meanings. A "server" is often hardware which provides services. It can be the service application itself which accepts connections. But it can also be the role in the communication, i.e. the client is the one initiating the connection and the server is the one accepting it. In your case, Service A, which is a server application, acts in the role of the client when initiating a TCP connection to Service B.
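A quick way to see this is a few lines of Python (the target host is just an example): the OS assigns the local port, and the application merely reads it back.

    import socket

    # Connect as a client; the OS picks the local (ephemeral) port.
    with socket.create_connection(("example.com", 80)) as s:
        local_ip, local_port = s.getsockname()    # OS-chosen ephemeral port
        remote_ip, remote_port = s.getpeername()  # server's well-known port
        print(f"client side: {local_ip}:{local_port}")
        print(f"server side: {remote_ip}:{remote_port}")

Run it twice and the local port will usually differ, while the server side stays fixed at port 80.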
I have an Azure VM running Linux (Ubuntu 18.04). I'm running a Python socket server there. The problem is that any socket client which shows no activity for 4 minutes gets disconnected. I've gone through https://github.com/wbuchwalter/azure-content/blob/master/includes/guidance-tcp-session-timeout-include.md and changed /etc/sysctl.conf on my Linux instance, but it's not working. Now my question is:
1. Is it possible to change the keep-alive timeout with the default public IP of the Azure VM? I ask because the link says of connections "outbound using SNAT (Source NAT)" that "This timeout is set to 4 minutes, and cannot be adjusted."
The inbound TCP timeout for a Public IP can be controlled. For outbound connections, the default value is 4 minutes and cannot be changed. You can still keep the session active by sending keep-alive packets.
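A minimal sketch of how the Python server could enable keep-alive on each accepted socket so idle connections survive the 4-minute SNAT timeout. The interval values below are assumptions; the point is to keep the idle time before the first probe well under 240 seconds.

    import socket

    def enable_keepalive(sock):
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        # Linux-specific tuning: start probing after 60s of idleness,
        # probe every 60s, and give up after 5 failed probes.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

    # usage in the accept loop:
    #   conn, addr = server_sock.accept()
    #   enable_keepalive(conn)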
I am trying to implement the following setup:
HA -|
    |- Redis1
    |- Redis2
At any time, only one of the Redis instances should serve incoming requests.
Going by the documentation, it seems that you can disable a server dynamically and HA would stop directing the traffic to the disabled server.
While this worked for new client connections, existing client connections were still served content by the disabled server.
But if I kill the redis instance, even the existing client connections are redirected to the other instance.
Is it possible to achieve this behavior without killing the instance?
Here's my HA config:
global
    stats socket /opt/haproxy/admin.sock mode 660 level admin
    stats socket ipv4@*:19999 level admin

defaults
    log global
    mode tcp

listen myproxy
    mode tcp
    bind *:4444
    balance roundrobin
    server redis1 127.0.0.1:6379 check
    server redis2 127.0.0.1:7379 check
Found the answer. You need to add the following directive:
on-marked-down shutdown-sessions
This closes any existing sessions when the server is marked down. E.g.:
server redis1 127.0.0.1:6379 check on-marked-down shutdown-sessions
server redis2 127.0.0.1:7379 check on-marked-down shutdown-sessions
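With this directive in place, marking a server down through the stats socket configured above (for example by piping the command into /opt/haproxy/admin.sock with socat) also terminates its established sessions, so existing clients reconnect and land on the other instance:

    disable server myproxy/redis1

and later, to put it back into rotation:

    enable server myproxy/redis1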
I am using HAProxy 1.6.4 as a TCP (not HTTP) proxy.
My clients are making TCP requests. They do not wait for any response; they just send the data and close the connection.
How does HAProxy behave when all back-end nodes are down?
I see that (from the client's point of view) HAProxy is accepting incoming connections.
HAProxy statistics show that the front-end has status OPEN, i.e. it is accepting connections.
The number of sessions and bytes-in increases for the front-end, but not for the back-end (it is DOWN).
Is HAProxy buffering incoming TCP requests, and will it pass them to the back-end once the back-end is up?
If yes, is it possible to configure this buffer size? Where is the data buffered (in memory, on disk)?
Is it possible to turn off the front-end (i.e. not accept incoming TCP connections) when all back-end nodes are DOWN?
Edit:
When the back-end started, I saw that:
* back-end in-bytes and sessions equal the front-end's number of sessions
* but my one and only back-end node has a lower number of bytes-in, fewer sessions, and has errors
So it seems that in the default configuration there is no TCP buffering.
Data is accepted by HAProxy even if all back-end nodes are down, but this data is lost.
I would prefer to turn off the TCP front-end when there are no back-end servers, so that client connections would be rejected. Is that possible?
Edit:
The HAProxy log is
Jul 15 10:02:32 172.17.0.2 haproxy[1]: 185.130.180.3:11319 [15/Jul/2016:10:02:32.335] tcp-in app/<NOSRV> -1/-1/0 0 SC \0/0/0/0/0 0/0 908
My log format is
%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tw/%Tc/%Tt\ %B\ %ts\ \%ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U
What I understand from the log:
there are no back-end servers
The termination state SC translates to:
S : the TCP session was unexpectedly aborted by the server, or the
server explicitly refused it.
C : the proxy was waiting for the CONNECTION to establish on the
server. The server might at most have noticed a connection attempt.
I don't think what you are looking for is possible. HAProxy handles the two sides of the connection (front-end, back-end) separately. The incoming TCP connection is established first, and only then does HAProxy look for a matching destination for it.
We are using HAProxy for Thrift (RPC) server load balancing in TCP mode. But we've encountered a problem when a back-end server restarts.
When our Thrift (RPC) server restarts, it first stops listening on the port which HAProxy is configured to connect to, but it still processes running requests until they are all done (a graceful restart).
So during the restart period, there are still connected sockets made from clients to the back-end server via HAProxy while the back-end server is not accepting any new connections. But HAProxy still treats this back-end server as healthy and will dispatch new connections to it. Any new connection dispatched to this server takes quite a long time to connect, and then times out.
Is there any way to notify HAProxy that the server has stopped listening, so that it does not dispatch any connections to it?
I've tried the following:
timeout connect set to a very low value + option redispatch + retries 3
option tcp-check
Neither solved the problem.