We are using the nginx proxy_pass feature to bridge RESTful calls to a backend app, and we use the nginx WebSocket proxy for the same system at the same time. Sometimes (my guess is when the system has had no client requests for a while) nginx freezes on every request until we restart it, after which everything works fine again. What is the problem? Do I have to change the keep-alive settings? I have already turned off the proxy buffering and caching features in nginx.conf.
I found the problem. By checking the nginx error log and a bit of hackery, sniffing, and guessing, I found out that the WebSocket connections frequently disconnect and reconnect (mobile devices), nginx keeps the old connections alive on its side, and eventually the maximum connection limit is reached. I just decreased the timeouts and increased the maximum number of connections.
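For anyone hitting the same thing, here is a rough sketch of the kind of settings meant above; the upstream address, the /ws/ location and the exact values are placeholders, not our actual config:

events {
    worker_connections 4096;            # raise the per-worker connection ceiling
}

http {
    upstream app {
        server 127.0.0.1:8080;          # placeholder backend address
    }

    server {
        listen 80;

        location /ws/ {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 60s;     # shorter timeout so dead WebSocket peers are dropped sooner
            proxy_send_timeout 60s;
        }
    }
}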
I have an application running inside a gateway.
The application is a CoAP server written with the libcoap library.
The server is running perfectly fine; the ip:port has been tested with different tools such as nmap, telnet and others, and each time they show that the port is open and the connection succeeds.
My problem is that there is no response from the server: Wireshark shows the requests being retransmitted until they time out.
After some research I thought the gateway doesn't support NAT loopback, so I tried sending requests from another connection (I used my phone's 4G). I even disabled the firewall on the gateway, but no success either.
UPDATE:
After some digging I managed to get a response from the server, but only over a TCP connection; over UDP the requests are still retransmitted until they time out.
From a logical point of view, what might the problem be here?
Note: UDP is a must for this application, so I can't just ignore it.
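One way to narrow this down, as a suggestion: take CoAP out of the picture and check whether plain UDP datagrams get through at all, e.g. with netcat (5683, the default CoAP port, is assumed here, and the exact flags depend on your netcat variant):

# on the gateway: listen for UDP datagrams on the CoAP port
nc -u -l 5683

# on the client: send a test datagram to the gateway
echo "ping" | nc -u <gateway-ip> 5683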
At this point we are using long polling with keepalives at the TCP level.
If we pull the Ethernet cable out of the client, the client does not detect the dead connection until the TCP keepalive counters run their course.
But what happens at the nginx level? When and how will nginx detect the same dead connection? How long will the connection stay open on the nginx side?
Should we use keepalive from nginx to the client as well?
Thank you!
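For reference, these are the nginx timers that bound how long such a connection stays open on the proxy side; this is only a sketch, and the /poll/ location, the upstream name and the values are illustrative:

location /poll/ {
    proxy_pass http://app;
    proxy_read_timeout 120s;    # how long nginx waits for the upstream to send something
    proxy_send_timeout 120s;    # how long nginx waits while sending to the upstream
    send_timeout       60s;     # how long nginx waits between writes to the client
    keepalive_timeout  75s;     # how long an idle keep-alive connection to the client is kept open
}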
We are using HAProxy for Thrift (RPC) server load balancing in TCP mode, but we have run into a problem when a backend server restarts.
When our Thrift (RPC) server restarts, it first stops listening on the port HAProxy is configured to connect to, but it keeps processing the requests already in flight until they are all done (graceful restart).
So during the restart period there are still connected sockets from clients to the backend server via HAProxy while the backend server is not accepting any new connections, yet HAProxy still treats this backend as healthy and keeps dispatching new connections to it. Any new connection dispatched to this server takes quite a long time to connect and then times out.
Is there any way to tell HAProxy that the server has stopped listening, so that it does not dispatch any new connections to it?
I've tried the following:
timeout connect set to a very low value + redispatch + retries 3
option tcp-check
Neither solved the problem.
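One thing that may be worth trying is tightening the health-check timing so HAProxy marks the server down within a second or two of it closing the listening port. A minimal sketch, where the backend name, addresses and ports are placeholders:

backend thrift_servers
    mode tcp
    balance roundrobin
    option tcp-check                        # plain TCP connect check against the listening port
    default-server inter 1s fall 2 rise 2   # check every second, mark down after 2 failures
    server rpc1 10.0.0.11:9090 check
    server rpc2 10.0.0.12:9090 check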
We've had a strange little problem for months now:
The load on our cluster (HTTP, long-lasting keepalive connections carrying a lot of very short (<100 ms) requests) is distributed very unevenly.
All servers are configured the same way, but some connections that push through thousands of requests per second end up being sent to only one server.
We tried both load-balancing strategies, but that does not help.
It seems to be strictly keepalive related.
The misbehaving backend has the following settings:
option tcpka
option http-pretend-keepalive
Is the option http-server-close meant to address this issue?
If I understand it correctly, it will close and re-open a lot of connections, which means extra load on the systems. Isn't there a way to keep the connections open but still balance the traffic evenly?
I tried enabling that option, but it kills all of our backends when they are under load.
HAProxy currently only supports keep-alive HTTP connections toward the client, not toward the server. If you want to be able to inspect (and balance) each HTTP request, you currently have to use one of the following options:
# enable keepalive to the client
option http-server-close
# or
# disable keepalive completely
option httpclose
The option http-pretend-keepalive doesn't change HAProxy's actual behavior with regard to connection handling. Instead, it is intended as a workaround for servers which don't work well when they see a non-keepalive connection (which is what HAProxy generates toward the backend server).
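Putting the two together, a minimal backend sketch might look like this (the backend name, server addresses and the balance algorithm are placeholders):

backend app_servers
    mode http
    balance leastconn
    option http-server-close        # keep-alive toward the client, close toward the server
    option http-pretend-keepalive   # hide the close from servers that mishandle it
    server app1 10.0.0.21:8080 check
    server app2 10.0.0.22:8080 check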
Support for keep-alive toward the backend server is scheduled for the final HAProxy 1.5 release, but its actual scope might still vary, and the final release date is still some way off...
Just FYI, it's present in the latest release, 1.5-dev20 (but take the fixes that came with it, as it shipped with a few regressions).
I am trying to send (copy) all nginx traffic to a Unix socket.
Here is the relevant part of my nginx.conf:
upstream unixsocket {
    server unix:/var/www/tmp2.sock;
}

post_action /sendLogging;

location /sendLogging {
    proxy_pass http://unixsocket;
}
Should I start a server on this socket?
socket -sl /var/www/tmp2.sock
If I do that, I can't see any requests arriving at the socket.
Moreover, because of this config, nginx is consuming a huge amount of CPU (50-90%), just while testing with ONE request.
--
EDIT:
My mistake: the .sock file was not writable by the nginx worker process. I gave it the appropriate permissions.
The reason for the high CPU usage was that the post_action triggered an internal redirect.
If anyone else runs into the internal redirect issue with post_action: I solved it by returning 444 from the location; that works in my case.
Thanks.
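For anyone reproducing this, a quick way to put a listener on the socket and give the worker access is sketched below; it assumes socat is available and that the nginx worker runs as www-data, both of which may differ on your system:

# start a simple listener that prints whatever arrives on the socket
socat UNIX-LISTEN:/var/www/tmp2.sock,fork -

# make the socket writable by the nginx worker user (assumed to be www-data here)
chown www-data:www-data /var/www/tmp2.sock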