I'm using nginx 1.10.3.
What I'm trying to force is the following scenario:
1-3. Client <--> Server | TCP three-way handshake
Client --> Server | HTTP GET
Server --> Client | TCP ACK
Server --> Client | HTTP response
Server --> Client | TCP RST, ACK
I am trying to provoke nginx into sending an RST packet after responding to the HTTP GET request.
For this purpose I set the "lingering_close off" configuration parameter in the nginx.conf file, but without success. Is there another way to provoke this kind of scenario?
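For reference, the relevant configuration fragment looks like this (a minimal sketch; the port and server_name are placeholders, and reset_timedout_connection is a related directive worth knowing about, though it only applies to timed-out connections):

```nginx
http {
    server {
        listen 80;
        server_name example.com;  # placeholder

        # Close the connection immediately instead of lingering to
        # read remaining client data.
        lingering_close off;

        # Related directive: send RST instead of FIN, but only when
        # closing *timed-out* connections.
        reset_timedout_connection on;
    }
}
```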
Recently we also met a similar scenario: there were many 'Broken pipe' errors in our application error log.
After analyzing the TCP transport, we found that nginx sometimes sends an RST immediately after sending a FIN to the upstream server. Our conclusion is that when the client closes its connection to nginx, nginx closes the corresponding upstream connection without waiting for the upstream to finish its remaining work.
So, following the nginx documentation, we added the directive proxy_ignore_client_abort on to the nginx configuration file. Reference: http://nginx.org/
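In context, the directive sits in the proxy configuration (a minimal sketch; the upstream name and address are placeholders):

```nginx
http {
    upstream app {
        server 10.0.0.5:8080;  # placeholder
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app;
            # Keep processing the upstream request even if the
            # client aborts, instead of closing the upstream
            # connection immediately.
            proxy_ignore_client_abort on;
        }
    }
}
```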
p.s. Our nginx version is 1.12.
Related
I use k3s + containerd to deploy my service. There are multiple services running in one fat container, which shares the host's network namespace via hostNetwork: true in the deployment definition. Services in this container communicate with each other over HTTP. I found that sometimes the HTTP communication is interrupted by a strange RST right after the connection is established.
I want to know in which cases this first RST, which interrupts an otherwise normal TCP connection, can occur.
Following is some of the TCP traffic captured.
TCP handshake succeeds, then the server receives the HTTP request and tries to send the response after processing, but an RST right after the handshake causes the client socket to be closed.
(Wireshark packet capture)
TCP handshake succeeds, and the server has not received the HTTP request; there is also an RST that closes the client right after the handshake. After 15 minutes, the server tries to close this connection because of a timeout.
(Wireshark packet capture)
TCP handshake succeeds, and the server has not received the HTTP request; again an RST closes the client right after the handshake. After about 100 s, a client using the same port that was reset tries to connect to the server; the server responds with an ACK belonging to the previous connection, which causes a client RST.
(Wireshark packet capture)
Environment:
Host OS: CentOS 7
Container OS: CentOS 7
Server program: Python 2.7.5, eventlet 0.22.0
HTTP library: Python 2.7 pycurl (libcurl 7.29.0); error seen: Connection reset by peer
K3s (v1.23.3) + containerd (1.5.9)
Kernel version: 4.18.0
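For what it's worth, one way such an abrupt RST can be produced deliberately (a minimal, self-contained sketch, not a claim about what k3s/containerd is doing here) is closing a socket with SO_LINGER set to a zero timeout; the peer then sees "Connection reset by peer":

```python
import errno
import socket
import struct

def rst_close(conn):
    # SO_LINGER with l_onoff=1, l_linger=0 makes close() send an RST
    # instead of the normal FIN.
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()
rst_close(conn)

try:
    cli.recv(1024)
    print("closed normally")
except OSError as e:
    print("client sees:", errno.errorcode[e.errno])  # → client sees: ECONNRESET
```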
When I run HAProxy with both a TCP and an HTTP health check on two backends, it returns a 503 Service Unavailable error and does not mark any of the backend servers as up, although they are. Both health checks work individually, but when they are set up together it returns the error. Can these two health checks work together in HAProxy?
Technically, if the HTTP check fails to establish a TCP connection, it will mark the server as down. Therefore, you should not need both: the TCP check is effectively built into the HTTP check.
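A minimal sketch of what that looks like (backend name, check URI, and addresses are placeholders):

```haproxy
backend app
    # The HTTP check must first complete a TCP connection, so a
    # separate TCP-level check is redundant.
    option httpchk GET /health
    server s1 10.0.0.11:8080 check
    server s2 10.0.0.12:8080 check
```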
I am a newbie to the implementation of TLS over TCP.
I am using winsock to send TCP packets to remote syslog server just like the example given here:
https://learn.microsoft.com/en-us/windows/win32/winsock/complete-client-code
Now I want to use TLS over TCP. I have configured rsyslog on my CentOS machine (syslog server) according to these steps: https://www.golinuxcloud.com/secure-remote-logging-rsyslog-tls-certificate/
But the above link describes sending logs from one syslog server to another. I need to send logs from my application (C++ socket programming) to a remote syslog server.
Can someone please help me with how I should achieve this? Do I need to store a certificate on the machine where my application runs, and how do I establish TLS over TCP from my application to the remote syslog server?
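To show the moving parts, here is a minimal sketch in Python rather than winsock/C++ (the host, port, and file names are assumptions; port 6514 is the conventional syslog-over-TLS port from RFC 5425). The key point is that the client machine needs a copy of the CA certificate that signed the rsyslog server's certificate; a client certificate/key pair is only needed if rsyslog is configured to require mutual authentication. In C++ the equivalent would be wrapping the TCP socket with a TLS library such as OpenSSL or Windows Schannel before sending.

```python
import socket
import ssl

def send_syslog_tls(host, port, message, cafile):
    """Send one syslog line to a remote server over TLS.

    `cafile` is the CA certificate that signed the rsyslog server's
    certificate; it must be present where the application runs.
    """
    ctx = ssl.create_default_context(cafile=cafile)
    with socket.create_connection((host, port)) as raw:
        # TLS handshake happens here; the server certificate is
        # verified against `cafile` and the host name.
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall((message.rstrip("\n") + "\n").encode())

# Example call (all values are placeholders):
# send_syslog_tls("syslog.example.com", 6514,
#                 "<134>myapp: hello over TLS", "ca.pem")
```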
I am using HAProxy 1.6.4 as a TCP (not HTTP) proxy.
My clients make TCP requests. They do not wait for any response; they just send the data and close the connection.
How does HAProxy behave when all back-end nodes are down?
I see that (from the client's point of view) HAProxy is accepting incoming connections.
The HAProxy statistics show that the front-end has status OPEN, i.e. it is accepting connections.
The number of sessions and bytes-in increases for the front-end, but not for the back-end (which is DOWN).
Is HAProxy buffering incoming TCP requests, and will it pass them to the back-end once the back-end is up?
If yes, is it possible to configure this buffer size? Where is the data buffered (memory, disk)?
Is it possible to turn off the front-end (stop accepting incoming TCP connections) when all back-end nodes are DOWN?
Edit:
when the backend started, I saw that
* back-end in-bytes and sessions equal the front-end number of sessions,
* but my one and only back-end node has fewer bytes-in, fewer sessions, and has errors.
So it seems that in the default configuration there is no TCP buffering.
Data is accepted by HAProxy even if all backend nodes are down, but this data is lost.
I would prefer to turn off the TCP front-end when there are no backend servers, so client connections would be rejected. Is that possible?
edit:
the haproxy log is
Jul 15 10:02:32 172.17.0.2 haproxy[1]: 185.130.180.3:11319 [15/Jul/2016:10:02:32.335] tcp-in app/ -1/-1/0 0 SC \0/0/0/0/0 0/0 908
my log format is
%ci:%cp\ [%t]\ %ft\ %b/%s\ %Tw/%Tc/%Tt\ %B\ %ts\ \%ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U
What I understand from the log:
there are no backend servers
termination state SC translates to
S : the TCP session was unexpectedly aborted by the server, or the
server explicitly refused it.
C : the proxy was waiting for the CONNECTION to establish on the
server. The server might at most have noticed a connection attempt.
I don't think what you are looking for is possible. HAProxy handles the two sides of the connection (front-end, back-end) separately. The incoming TCP connection is established first, and only then does HAProxy look for a matching destination for it.
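That said, a partial workaround may exist at the content stage (a sketch to verify against your HAProxy version; the names and port are placeholders): the connection is still accepted at the TCP level, but it can be closed immediately when no server in the backend is up, using the nbsrv ACL:

```haproxy
frontend tcp-in
    mode tcp
    bind :9000
    # Reject (close) the connection when the "app" backend has no
    # usable servers left.
    acl app_down nbsrv(app) lt 1
    tcp-request content reject if app_down
    default_backend app
```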
What mechanism does Couchbase Sync Gateway use for getting database changes from the Couchbase server?
Does it do a long poll or create a WebSocket connection?
Or does it frequently invoke the Couchbase Server REST API? If so, which REST API, and what queries does it send in the HTTP request for that REST API?
Neither - it uses DCP (the same underlying protocol used by replication and XDCR) to subscribe to updates from Couchbase Server.
After some research I found out the following:
1) sync_gateway first establishes a TCP connection with the Couchbase server on port 8091, and over that TCP connection it sends HTTP GET requests to invoke the REST APIs /pools and /pools/default.
2) After that, whenever there is a document change initiated by a user, sync_gateway sends a TCP packet whose data field asks for the user information and the information about the document being changed.
3) sync_gateway then sends another TCP packet with the revised revision of the document and gets a response from the Couchbase server confirming that the document has been revised.
4) All of this conversation happens over plain TCP PSH/ACK packets, so beyond the initial requests there are no HTTP packets flowing, only the two servers communicating over TCP.