PostgreSQL frontend unexpectedly closes connection - postgresql

I'm a little bit confused by the following case.
I've got a Postgres server running on host A and a Java-based client running on host B. The client uses the org.postgresql.Driver JDBC driver (version 9.1-901.jdbc3).
Sometimes, while executing a long-running stored procedure, I get the exception "java.net.SocketException: Socket closed". I'm using org.apache.commons.dbcp.BasicDataSource for retrieving connections. The DBCP pool is configured with default options.
I captured a tcpdump to figure out on which side (client or server) the socket is being closed.
Here is what I've got:
1. Client B sends a test query message ("Select 1") when it tries to borrow a connection from the DBCP pool.
2. Server A sends a successful response back (Type: Command completion, Ready for query).
3. Client B sends an ACK in response to server A's response (see item 2).
4. Client B sends the query message to server A.
5. Server A sends an ACK in response to the client's query message (see item 4).
6. Client B sends a termination message (Type: Termination) after some time has passed (from 3 to 10 minutes, sometimes even more).
7. Client B sends a FIN, ACK message to the server.
8. Server A sends back an ACK for the termination message.
9. Server A sends an ACK for the FIN, ACK message (item 7).
10. Server A sends back a response to the client query from item 4 (Type: Row description, Columns: 40).
11. Client B sends an RST message (reset).
12. Server A continues sending the query response (Type: Data row, Length: 438, Columns: 40), and so on.
13. Client B sends an RST message (reset) again.
14. Server A continues sending the query response (Type: Data row, Length: 438, Columns: 40), and so on.
15. Client B sends an RST message (reset).
After that, the communication appears to be finished.
After item 6, my client logs show an exception like the following:
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:145)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:114)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)
at org.postgresql.core.PGStream.ReceiveChar(PGStream.java:274)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1661)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
Could you please help me figure out the reason for such a failure? (This bug happens once per 10 successful cases.)

We had a similar problem, and it was caused by a firewall or connection tracking router between the server and the client.
I am guessing you took the tcpdump on the server side. The query runs for a considerable time with no traffic on the connection. The firewall has a timer on the open connection; it expires and the firewall closes the connection towards the server, and also back towards the client. On the capture at the server side, it looks like the client is closing the connection.
You could verify this by capturing on the client side at the same time as you capture on the server side: on the client side it will look like the server has closed the connection, while on the server side it looks like the client is closing it. In reality the firewall is closing it in both directions.
To prevent this, you can set tcp_keepalives_idle, tcp_keepalives_interval and/or tcp_keepalives_count on the server (if your OS supports TCP keepalives), so that the connection never sits idle long enough for the firewall's timer to expire. Alternatively, you will have to change the settings on the firewall.
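For example (the values below are illustrative, not from the original answer), the server-side settings would look like this in postgresql.conf; the idea is simply to keep the probe interval well below the firewall's idle timeout:

# postgresql.conf - keepalive probes on otherwise idle connections
tcp_keepalives_idle = 60         # seconds of inactivity before the first keepalive probe
tcp_keepalives_interval = 10     # seconds between unanswered probes
tcp_keepalives_count = 5         # unanswered probes before the connection is considered dead

These parameters can also be set per session (SET tcp_keepalives_idle = 60). On the client side, reasonably recent versions of the PostgreSQL JDBC driver also expose a tcpKeepAlive connection property (appending ?tcpKeepAlive=true to the JDBC URL), which enables OS-level keepalives on the client socket with probe timing controlled by the client operating system.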

Related

ZMQ: Message gets lost in Dealer Router Dealer pattern implementation

I have a working setup where multiple clients send messages to multiple servers. Each message targets only one server. The client knows the IDs of all possible servers and only sends a message if that server is actually connected. Each server connects to the socket on startup. There are multiple server workers, which bind to an inproc router socket. Communication is always initiated by the client, and messages are sent to each server asynchronously.
This is achieved using a DEALER -> ROUTER -> DEALER pattern. My problem is that when the number of client and server workers increases, the "ack" sent by the server to the client (step 7 below) is never delivered to the client. Thus, the client is stuck waiting for the acknowledgement while the server is waiting for more messages from the client. Both systems hang and never come out of this condition unless restarted. Details of the configuration and communication flow are given below.
I've checked the system logs and nothing evident comes out of them. Any help or guidance on triaging this further would be appreciated.
At startup, the client connects its socket to the broker's IP:port as a DEALER:
requester, _ := zmq.NewSocket(zmq.DEALER)
The dealers connect to the broker. The broker connects the frontend (client workers) to the backend (server workers). The frontend is bound to a TCP socket, while the backend is bound as inproc.
// Frontend dealer workers
frontend, _ := zmq.NewSocket(zmq.DEALER)
defer frontend.Close()
// For workers local to the broker
backend, _ := zmq.NewSocket(zmq.DEALER)
defer backend.Close()
// Frontend should always use TCP
frontend.Bind("tcp://*:5559")
// Backend should always use inproc
backend.Bind("inproc://backend")
// Initialize Broker to transfer messages
poller := zmq.NewPoller()
poller.Add(frontend, zmq.POLLIN)
poller.Add(backend, zmq.POLLIN)
// Switching messages between sockets
for {
    sockets, _ := poller.Poll(-1)
    for _, socket := range sockets {
        switch s := socket.Socket; s {
        case frontend:
            for {
                msg, _ := s.RecvMessage(0)
                workerID := findWorker(msg[0]) // Get server workerID from message for which it is intended
                log.Println("Forwarding Message:", msg[1], "From Client: ", msg[0], "To Worker: ")
                if more, _ := s.GetRcvmore(); more {
                    backend.SendMessage(workerID, msg, zmq.SNDMORE)
                } else {
                    backend.SendMessage(workerID, msg)
                    break
                }
            }
        case backend:
            for {
                msg, _ := s.RecvMessage(0)
                // Register new workers as they come and go
                fmt.Println("Message from backend worker: ", msg)
                clientID := findClient(msg[0]) // Get client workerID from message for which it is intended
                log.Println("Returning Message:", msg[1], "From Worker: ", msg[0], "To Client: ", clientID)
                frontend.SendMessage(clientID, msg, zmq.SNDMORE)
            }
        }
    }
}
Once the connection is established:
1. The client sends a set of messages on the frontend socket. These messages contain metadata about all the messages that will follow.
requester.SendMessage(msg)
2. Once these messages are sent, the client waits for an acknowledgement from the server.
reply, _ := requester.RecvMessage(0)
3. The router transfers these messages from the frontend to the backend workers based on the logic defined above.
4. The backend dealers process these messages and respond back over the backend socket, asking for more messages.
5. The broker then transfers the message from the backend inproc socket to the frontend socket.
6. The client processes this message and sends the required messages to the server. The messages are sent as a group (batch), asynchronously.
7. The server receives and processes all of the messages sent by the client. After processing all the messages, the server sends an "ack" back to the client to confirm that all the messages were received.
8. Once all the messages have been sent by the client and processed by the server, the server sends a final message indicating that the transfer is complete.
9. The communication ends here.
This works great when there is a limited set of workers and messages transferred. The implementation has multiple dealers (clients) sending messages to a router. The router in turn sends these messages to another set of dealers (servers), which process the respective messages. Each message contains the client and server worker IDs for identification.
We have configured the following limits for the send and receive queues:
Broker HWM: 10000
Dealer HWM: 1000
Broker Linger Limit: 0
Some more findings:
This issue is prominent when the server processing (step 7 above) takes more than 10 minutes.
The client and server are running on different machines; both are Ubuntu 20 LTS with ZMQ version 4.3.2.
Environment
libzmq version (commit hash if unreleased): 4.3.2
OS: Ubuntu 20LTS
Eventually, it turned out to be a matter of configuring heartbeats for the ZMQ sockets. I referred to the documentation here: http://api.zeromq.org/4-2:zmq-setsockopt
I configured the following parameters:
ZMQ_HANDSHAKE_IVL: Set maximum handshake interval
ZMQ_HEARTBEAT_IVL: Set interval between sending ZMTP heartbeats
ZMQ_HEARTBEAT_TIMEOUT: Set timeout for ZMTP heartbeats
Configure the above parameters appropriately to ensure that there is a constant check between the client and server dealers; that way, even if one side is delayed in its processing, the other one doesn't time out abruptly.
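For reference, here is a minimal sketch of that configuration using the pebbe/zmq4 binding that the broker code above appears to use. The durations, the helper name newHeartbeatDealer and the broker-host address are illustrative rather than taken from the question, and the duration-typed setters assume a reasonably recent version of the binding with libzmq >= 4.2:

package main

import (
    "log"
    "time"

    zmq "github.com/pebbe/zmq4"
)

// newHeartbeatDealer creates a DEALER socket with ZMTP heartbeats enabled, so
// that an idle but healthy connection keeps exchanging PING/PONG frames and a
// dead peer is detected instead of both sides blocking forever.
func newHeartbeatDealer() (*zmq.Socket, error) {
    s, err := zmq.NewSocket(zmq.DEALER)
    if err != nil {
        return nil, err
    }
    // Abort the connection if the initial handshake takes longer than 30 s.
    if err := s.SetHandshakeIvl(30 * time.Second); err != nil {
        return nil, err
    }
    // Send a ZMTP PING after 60 s of silence on the connection.
    if err := s.SetHeartbeatIvl(60 * time.Second); err != nil {
        return nil, err
    }
    // Treat the peer as dead if no PONG arrives within 30 s of a PING.
    if err := s.SetHeartbeatTimeout(30 * time.Second); err != nil {
        return nil, err
    }
    return s, nil
}

func main() {
    // The options must be set before Connect; they apply to connections made afterwards.
    requester, err := newHeartbeatDealer()
    if err != nil {
        log.Fatal(err)
    }
    defer requester.Close()
    if err := requester.Connect("tcp://broker-host:5559"); err != nil { // broker-host is a placeholder
        log.Fatal(err)
    }
}

The same options would need to be set on the broker's TCP frontend socket as well, so that either end of the TCP link notices a dead or stalled peer instead of waiting indefinitely.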

Telit 4G modem LE920-EUG giving errors on HTTP commands (AT#HTTPCFG, AT#HTTPQRY); no HTTP command working

I have the Telit LE920-EUG 4G LTE module. I am trying to execute HTTP GET and POST requests to a remote server. Though the PDP context activates properly and I have internet access on the SIM that I'm using, I can't seem to be able to connect to a remote server and execute HTTP requests (both POST and GET) from the module.
I have tried two approaches. The first uses the direct HTTP commands supported by the module (all commands mentioned in the LE9x0 AT command reference guide); the command sequence is given below, but +CME ERROR: 100 occurs, and it's the same for every HTTP command (AT#HTTPQRY, AT#HTTPRCV) that I try to execute.
AT#SGACT=1,1
#SGACT: 31.81.208.1
OK
AT#HTTPCFG=0,"httpbin.org",80,0,,,0,120,1
+CME ERROR: 100
//No configuration details
AT#HTTPCFG?
+CME ERROR: 100
AT#HTTPCFG=?
+CME ERROR: 100
I have also tried the GET and POST commands after socket dialing. The socket connects, but it neither receives any data from the server nor posts anything to it, and the connection closes with a NO CARRIER. The command sequence that I'm using is given below.
//Socket Dial
AT#SD=1,0,80,www.m2msupport.net
CONNECT
//GET commands sequence
GET /m2msupport/http_get_test.php HTTP/1.1
Host:www.m2msupport.net
Connection:keep-alive
//Connection closes with No Response
NO CARRIER
//Socket info shows the bytes sent
at#si=1
#SI: 1,86,0,0,0
OK

Connection reset by tomcat server on continuous reception of HTTP GET request

I am doing a load test of a web server. Currently I am using Tomcat 6 to test my code. While the test is running, the server resets the connection after a few minutes of receiving continuous GET requests for the same page. If I send GET requests with some gap (say 500 ms), then it works fine. If I send GET requests with a gap of 10 ms or less, then the server resets the connection a few seconds after the start of the test. Please help me figure out how to fix this problem. What is the reason for the reset? Is the server overloaded, or do I have to perform some operation while establishing the connection?
My GET request format is:
GET /index.html HTTP/1.1
Host: 180.168.40.40
Connection: keep-alive

socket programming for bad network

client:
socket(), connect() and then
for (1 to 1024) {
    write(1024 bytes)
}
exit(0);
server:
socket(), bind(), listen()
while (1) {
    accept()
    while ((n = read())) {
        if (n == -1) abort(); /* never happened */
        total_read += n
    }
    close()
}
Now, the client runs on a Mac behind NAT and the server runs on my VPS (abroad).
Generally it works fine: the client sends all the data and exits, and the server receives all the data.
However, when the client is running and the network suddenly breaks for a couple of minutes (and then comes back), the client won't exit for a very long time. If I kill it with Ctrl+C and run it again, the server does not seem to read the data any more (while the client is still running).
Here is what netstat shows:
client:
tcp4 0 130312 192.168.1.254.58573 A.B.C.D.8888 ESTABLISHED
server:
tcp 0 0 A.B.C.D:8888 a.b.c.d:54566 ESTABLISHED 10970/a.out
tcp 102136 0 A.B.C.D:8888 a.b.c.d:60916 ESTABLISHED -
A.B.C.D is my VPS address
a.b.c.d is my public client address
My questions are:
1. Why?
2. The server works fine after restarting; how do I write the code so that it recovers without restarting?
In TCP, there's no way to tell that a connection has failed unless you try to send something on the connection. TCP doesn't perform active monitoring of the connection (actually, there are optional "keepalive" packets, but these are not normally sent until the connection has been idle for a couple of hours). When you send something, you'll eventually get an error if there's a timeout waiting for the other machine to return an acknowledgement. But if you're just reading data without sending, you can't tell that the connection has failed -- it just looks like the sender doesn't have anything to send.
You can resolve this by designing your application so that the client is required to send something every N seconds. Then set a timer in the server that detects that you haven't received anything for more than N seconds (you should add a little extra time to allow for transient delays).
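As a sketch of that server-side timer (shown here in Go rather than the question's C-style pseudocode; the port 8888 comes from the netstat output above, while the 30-second limit is just an example value):

package main

import (
    "log"
    "net"
    "time"
)

// handleConn reads until EOF, but gives up if the client has been silent for
// more than 30 seconds, so a dead connection does not block the server forever.
func handleConn(conn net.Conn) {
    defer conn.Close()
    buf := make([]byte, 4096)
    total := 0
    for {
        // Make the next Read fail if nothing arrives within 30 seconds.
        conn.SetReadDeadline(time.Now().Add(30 * time.Second))
        n, err := conn.Read(buf)
        total += n
        if err != nil {
            if ne, ok := err.(net.Error); ok && ne.Timeout() {
                log.Printf("client idle too long, dropping connection after %d bytes", total)
            }
            return // io.EOF on orderly close, some other error on failure
        }
    }
}

func main() {
    ln, err := net.Listen("tcp", ":8888")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go handleConn(conn)
    }
}

If the application protocol cannot be changed so that the client sends periodic traffic, OS-level keepalives with shorter timers are the alternative mentioned above; in Go those correspond to net.TCPConn's SetKeepAlive and SetKeepAlivePeriod.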
When the network is broken, what happens is that your client keeps sending data and at some point the socket send buffer gets full (I understand from what you show that you are sending 1024 bytes, 1024 times, 1 MB in total). The default send buffer could be 16 KB (certainly less than 1 MB). Then, when the client tries to write, it blocks forever.
By the way, I don't know whether, after a number of TCP retransmission timeouts, TCP eventually gives up and closes the socket so that the socket calls return an error; I think that's not happening. So connect() fails if there is a problem in the network, but write() and read() do not fail.
On the server side, the server stays blocked in read() because it never receives the EOF.
Solution:
On the client side, use non-blocking sockets; if the network is broken, at some point write() will return the error EWOULDBLOCK. Then you will realize the send buffer is full for some reason. At that point, you could close the connection and try to connect again; if the network is broken, the connect will fail with an error.
On the server side, also use non-blocking sockets and the select() function with a timeout. After a few timeouts you may decide there is a problem with the connection and close it.
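A rough client-side sketch of the same idea in Go (using deadlines rather than explicit non-blocking sockets and EWOULDBLOCK; the 30-second and 10-second values and the A.B.C.D address are placeholders):

package main

import (
    "log"
    "net"
    "time"
)

// sendAll writes the whole buffer, but gives up on a write that makes no
// progress for 30 seconds so a broken network surfaces as an error instead
// of blocking forever once the send buffer is full.
func sendAll(conn net.Conn, data []byte) error {
    for len(data) > 0 {
        if err := conn.SetWriteDeadline(time.Now().Add(30 * time.Second)); err != nil {
            return err
        }
        n, err := conn.Write(data)
        if err != nil {
            return err
        }
        data = data[n:]
    }
    return nil
}

func main() {
    conn, err := net.DialTimeout("tcp", "A.B.C.D:8888", 10*time.Second)
    if err != nil {
        log.Fatal(err) // network broken: the connect itself fails
    }
    defer conn.Close()
    if err := sendAll(conn, make([]byte, 1024*1024)); err != nil {
        log.Fatal(err) // e.g. write timed out; the caller could close and reconnect here
    }
}

Either way, the important part is the same as in the answer above: a write that cannot make progress must eventually fail, and the caller then decides whether to close the connection and reconnect.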

FIN,ACK after PSH,ACK

I'm trying to implement communication between a legacy system and a Linux system, but I constantly get one of the following two scenarios.
(The legacy system is the server; the Linux machine is the client.)
Scenario 1: recv(2) returns 0 (the peer has performed an orderly shutdown):
> SYN
< SYN, ACK
> ACK
< PSH, ACK (the data)
> FIN, ACK
< ACK
> RST
< FIN, ACK
> RST
> RST
Scenario 2: connect(2) returns -1 (error):
> SYN
< RST, ACK
When the server has sent its data, the client should answer with data, but instead I get a "FIN, ACK".
Why is it like this? How should I interpret it? I'm not that familiar with TCP at this level.
When the server has sent its data, the client should answer with data, but instead I get a "FIN, ACK". Why is it like this? How should I interpret this?
It could be that once the server has sent the data (line 4), the client closes the socket or terminates prematurely, and the operating system closes its socket and sends the FIN (line 5). The server replies to the FIN with an ACK, but the client has already ceased to exist, and its operating system responds with an RST. (I would expect the client OS to silently ignore and discard any TCP segments arriving for a closed connection during the notorious TIME-WAIT state, but that doesn't happen here for some reason.)
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Connection_termination:
Some host TCP stacks may implement a half-duplex close sequence, as Linux or HP-UX do. If such a host actively closes a connection but still has not read all the incoming data the stack already received from the link, this host sends a RST instead of a FIN (Section 4.2.2.13 in RFC 1122). This allows a TCP application to be sure the remote application has read all the data the former sent—waiting the FIN from the remote side, when it actively closes the connection. However, the remote TCP stack cannot distinguish between a Connection Aborting RST and this Data Loss RST. Both cause the remote stack to throw away all the data it received, but that the application still didn't read.
After FIN, PSH, ACK --> one transaction is completed.
A second request is being received, but [RST] seq=140 win=0 len=0 is sent in response.