IMAP-Copy: Client closes connection after 60 seconds - email

I have implemented an IMAP server and I am facing the following problem:
There are some mail clients (Apple's, for example) that close a connection after 60 seconds. When a COPY command arrives for a large number of mails, the command takes longer than 60 seconds on the server side. After 60 seconds the mail client closes the connection (I have seen the FIN in the TCP stack), and when the server tries to reply with a success response, the client is already gone.
After some time the mail client sends the same command and the same thing happens again.
I already tried sending a TCP keepalive, without success.
Does anyone have an idea what to try next?

You should be able to send an untagged OK response at any time. This may work as a keep-alive:
* OK Working on it...
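
To make that concrete, here is a minimal Java sketch of a handler that emits an untagged OK every 30 seconds while a long COPY runs. The runCopy callback, the tag handling, and the 30-second interval are assumptions for illustration, not part of the original answer:

import java.io.PrintWriter;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

class CopyHandler {
    // out writes to the client connection; runCopy stands in for the actual COPY work.
    void handleCopy(PrintWriter out, String tag, Runnable runCopy) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        // An untagged OK every 30 seconds keeps the client's idle timer from firing.
        ScheduledFuture<?> heartbeat = timer.scheduleAtFixedRate(
                () -> { out.print("* OK Working on it...\r\n"); out.flush(); },
                30, 30, TimeUnit.SECONDS);
        try {
            runCopy.run();
            out.print(tag + " OK COPY completed\r\n"); // tagged completion response
            out.flush();
        } finally {
            heartbeat.cancel(false);
            timer.shutdownNow();
        }
    }
}

The tagged OK still goes out last, so a compliant client sees a normal completion; the untagged lines merely prove the server is still alive.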

Related

Delay in [SYN,ACK] packet from server to client

There is a delay on the server machine when sending the [SYN,ACK] packet to the client machine for the first connection attempt from the client. Here are some observations, analyzed with the sniffer tool Wireshark.
Due to this delay:
The client application sends a [TCP Retransmission] packet to the server.
Later, the connection timeout (3 seconds) expires on the client side, and it makes a second connection attempt to the server.
Surprisingly, the server immediately sends a [SYN,ACK] packet back to the client for the second connection attempt.
After sending the [SYN,ACK] packet for the second attempt, the server responds with the [SYN,ACK] packet for the first attempt.
For better understanding: the client application sends connection requests to a certain set of server ports all together. The server sends the [SYN,ACK] packet from the listening port, which is one of these ports.
I would be pleased if somebody could explain:
Why is there a delay in the [SYN,ACK] packet from the server machine?
Why is the server able to respond immediately with a [SYN,ACK] packet for the second attempt, but responded to the first connection attempt only after sending the [SYN,ACK] for the second?
Who takes care of sending the [SYN,ACK] packet back to the client machine? Is it the server application, or some other operating system service?
A Wireshark screenshot is attached here. The observations above are based on frame #20145 to frame #20428.

tcp connection issue for unreachable server after connection

I am facing an issue with a TCP connection.
I have a number of clients connected to a remote server over TCP.
Now, if for some reason I am not able to reach my server after the TCP connection has been successfully established, I do not receive any error on the client side.
If I run netstat on the client end, it shows the clients as connected to the remote server, even though I am not able to ping the server.
So now I am in a situation where the server shows it is not connected to any client, while on the other end the client shows it is connected to the server.
I have tested this with WebSockets on Node.js as well, and the same behavior persists there too.
I have tried to google around, but no luck.
Is there any standard solution for this?
This is by design.
If two endpoints have a successful socket (TCP) connection between each other but aren't sending any data, then the TCP state machines on both endpoints remain in the ESTABLISHED state.
Imagine if you had a shell connection open in a terminal window on your PC at work to a remote Unix machine across the Internet. You leave work that evening with the terminal window still logged in and at the shell prompt on the remote server.
Overnight, some router in between your PC and the remote computer goes out. Hours later, the router is fixed. You come into work the next day and start typing at the shell prompt. It's like the loss of connectivity never happened. How is this possible? Because neither socket on either endpoint had anything to send during the outage. Given that, there was no way that the TCP state machine was going to detect a connectivity failure - because no traffic was actually occurring. Now if you had tried to type something at the prompt during the outage, then the socket connection would eventually time out within a minute or two, and the terminal session would end.
One workaround is to enable the SO_KEEPALIVE option on your socket. YMMV with this socket option, as this mode of TCP does not always send keep-alive messages at a rate you control.
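
In Java, for example, the option is a one-liner. A minimal sketch, where the host and port are placeholders, and where the probe timing still comes from the OS (e.g. net.ipv4.tcp_keepalive_time on Linux) rather than the application:

import java.io.IOException;
import java.net.Socket;

class KeepAliveClient {
    static Socket connect(String host, int port) throws IOException {
        Socket socket = new Socket(host, port);
        // Ask the OS to send TCP keepalive probes on this connection. When the
        // probes go unanswered, reads and writes start failing instead of
        // hanging forever.
        socket.setKeepAlive(true);
        return socket;
    }
}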
A more common approach is to just have your socket send data periodically. Some protocols on top of TCP that I've worked with have their own notion of a "ping" message for this very purpose. That is, the client sends a "ping" message over the TCP socket every minute and the server responds with "pong" or some equivalent. If either side fails to get the expected ping/pong message within N minutes, then the connection, regardless of socket error state, is assumed to be dead. This approach of sending periodic messages also helps with NATs that tend to drop TCP connections for very quiet protocols when they don't observe traffic over a period of time.
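
Here is a minimal Java sketch of the ping side of that idea. The PING/PONG wire format, the intervals, and the onPong hook are assumptions for illustration:

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class Pinger {
    private volatile long lastPongMillis = System.currentTimeMillis();

    // The read path should call this whenever a "PONG" arrives from the peer.
    void onPong() { lastPongMillis = System.currentTimeMillis(); }

    void start(Socket socket, long pingEverySec, long deadAfterSec) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            try {
                if (System.currentTimeMillis() - lastPongMillis > deadAfterSec * 1000) {
                    socket.close(); // no pong for too long: assume the link is dead
                    timer.shutdown();
                    return;
                }
                OutputStream out = socket.getOutputStream();
                out.write("PING\n".getBytes());
                out.flush();
            } catch (IOException e) {
                timer.shutdown(); // the write itself failed: the connection is gone
            }
        }, pingEverySec, pingEverySec, TimeUnit.SECONDS);
    }
}

The key design point is that liveness is decided by the application, not by the socket's error state, which (as described above) can stay clean through an outage.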

TCP keepalive not working

The situation:
Postgres 9.1 on Debian Server
Scala (Java) application using the LISTEN/NOTIFY mechanism to get notified through JDBC
As there can be very long pauses (multiple days) between notifications, I ran into the problem that the underlying TCP connection silently got terminated after some time, and my application stopped receiving notifications.
When googling for a solution, I found that there is a parameter, tcpKeepAlive, that you can set on the connection. So I set it to true and was happy, until the next day, when I saw that my connection was dead again.
Since I had been suspicious, there was a Wireshark capture running in parallel, which now turns out to be very useful. Almost exactly two hours after the last successful communication on the connection of interest, my application sent a keepalive packet to the database server. However, the server responded with RST, as it seems it had already closed the connection.
The net.ipv4.tcp_keepalive_time on the server is set to 7200, which is 2 hours.
Do I need to somehow enable keepalive on the server or increase the keepalive_time?
Is this the way to go about keeping my application connected?
TL;DR: My database connection gets terminated after long inactivity. Setting tcpKeepAlive didn't fix it, as the server responds with RST. What to do?
As Craig suggested in the comments, the problem was very likely related to some piece of network hardware between the server and the application. The fix was to increase the frequency of the keepalive messages.
In my case the OS was Windows, where you have to create a registry key with the idle time in milliseconds after which the message should be sent. Info on that here
I have set it to 15 minutes, which seems to have solved the issue.
UPDATE:
It only seemed like it solved the issue. After about two days of program run time, my connection was gone again. I switched to checking the validity of my connection every time I use it. This does not feel like the real solution, but it is a solution nonetheless.
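
For reference, a minimal Java sketch of that validate-before-use pattern with plain JDBC; the URL is a placeholder and the 5-second validation timeout is an arbitrary choice:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

class GuardedConnection {
    private final String url; // e.g. a jdbc:postgresql://... URL (placeholder)
    private Connection conn;

    GuardedConnection(String url) { this.url = url; }

    // Validate before every use; reopen if the socket died silently.
    synchronized Connection get() throws SQLException {
        if (conn == null || !conn.isValid(5)) { // 5-second validation timeout
            if (conn != null) {
                try { conn.close(); } catch (SQLException ignored) { }
            }
            conn = DriverManager.getConnection(url);
        }
        return conn;
    }
}

Connection pools typically offer the same behavior through a test-on-borrow setting, which may be a cleaner way to get it.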

PHP - IRC "QUIT" not working properly

CHANGE:
I have determined that the problem has nothing to do with my code. However, the problem still remains; as it appears to be caused by the IRC server, I'm still searching for the reason.
The server I'm connecting to uses two kinds of PING requests:
One is sent upon connecting to the server, and its value is an alphanumeric string of 8 characters.
Example: PING :EA0E9275
The other one comes after the server sends out the MOTD, joins the channels, and completes "End of /NAMES list". Then, after some delay, the server sends me a PING request with the currently connected host as its value.
Example: PING :irc.ams.nl.euirc.net
If I send the command "QUIT :Quit Message" before I reply to the host PING request, the server ignores the QUIT message and instead quits with a server-filled status message similar to a "Client Exited" message.
Example: ERROR :Closing Link: Nick[IP.ADD.RE.SS] (Life is too short...)
However, if I send the same command after responding to the host PING request, my QUIT gets processed as it should.
Example: ERROR :Closing Link: Nick[IP.ADD.RE.SS] (Quit: Quit Message)
I've checked the RFC and found this in the QUIT section:
If, for some other reason, a client connection is closed without the client issuing a QUIT command (e.g. client dies and EOF occurs on socket), the server is required to fill in the quit message with some sort of message reflecting the nature of the event which caused it to happen.
Also, if you still need to see the partial code I'm using to accomplish this, you can check it here. However, this is a common issue with mIRC, the IRC client.
Basic scheme
Connecting to server...
Connected!
Server waiting for NICK/USER info...
Server received NICK/USER info, waiting for the alphanumeric PING reply...
Server received the alphanumeric PING reply, sending MOTD.
End of MOTD; sending JOIN to join channels...
Joined the channels; the NAMES list for the channels has been requested.
End of NAMES list.
Receiving active channel(s)/server data.
If a QUIT command is sent now, the server ignores the usual QUIT and sends "Closing Link" with the server-default status quit (Life is too short...).
Server does an alive-check: the host PING (irc.ams.nl.euirc.net) is received, and the server is waiting for a reply...
Sent the server the reply.
If a QUIT command is sent now, the server processes the QUIT command at user level, the usual way, and sends "Closing Link" with the user-specified message or empty ((Quit: User Message) or (Quit: )).
Let's review your code for a bit:
When I made my IRC bot, I explode()d the string from the server into words (splitting by space); you can then refer to the words:
if ($words[0] == "PING") { reply("PONG " . $words[1]); } // $words[1] keeps its leading colon, e.g. ":EA0E9275"
Always PONG immediately after a PING, and reply with the same token the server sent.
If that rule is followed, you should never have the problem of the server waiting for a PING reply, because you'll answer immediately. A client is expected to honor the PING/PONG commands from the server; otherwise the server will consider it offline (yes, even if you send anything else, the server still expects the PONG).
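
If it helps, here is a minimal Java sketch of that loop; the server name, the registration lines, and the decision of when to QUIT are placeholders, not part of the original answer:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

class IrcPingDemo {
    public static void main(String[] args) throws IOException {
        Socket sock = new Socket("irc.example.net", 6667); // placeholder server
        BufferedReader in = new BufferedReader(new InputStreamReader(sock.getInputStream()));
        PrintWriter out = new PrintWriter(sock.getOutputStream(), true);

        out.print("NICK Nick\r\nUSER Nick 0 * :Nick\r\n");
        out.flush();

        String line;
        while ((line = in.readLine()) != null) {
            String[] words = line.split(" ");
            if (words[0].equals("PING")) {
                // Echo the server's token back; words[1] keeps its leading colon.
                out.print("PONG " + words[1] + "\r\n");
                out.flush();
            }
            // ... handle other traffic here; send QUIT only once no PING is pending,
            // e.g. out.print("QUIT :Quit Message\r\n"); out.flush();
        }
    }
}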

socket error 10054

I have a client/server program. The client uses a socket to send a file to the server. After sending a bit more than about 700k of data, the client (on Windows 7) receives a socket 10054 error, which means "Connection reset by peer".
The server runs on CentOS 5.4; the client is a Windows 7 virtual machine running in VirtualBox. Client and server communicate via a virtual network interface.
The command port (which sends logs) works normally, but the data port (which sends the file) has the problem.
Could this be caused by a wrong configuration of the socket buffer size, or by something else?
I'd appreciate it if anyone could help me check the problem. Thanks.
Every time, I call send with a 4096-byte buffer:
send(socket, buffer, 4096, 0 )
CentOS socket config.
#sysctl -a
...
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_mem = 196608 262144 393216
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_fack = 1
I don't quite understand what the socket buffer configuration means. Could it cause the incomplete receive problem?
It's almost certainly a bug in your code. Most likely, one side thinks the other side has timed out and so closes the connection abnormally. The most common way this happens is that you call a receive function to get data, but you have actually already received that data and just didn't realize it. So you're waiting for data that you have already received, and thus you time out.
For example:
1) Client sends a message.
2) Client sends another message.
3) Server reads both messages but thinks it only got one; it sends an acknowledgement.
4) Client receives the acknowledgement and waits for a second acknowledgement, which the server will never send.
5) Server waits for a second message, which it has actually already received.
Now the server is waiting for the client and the client is waiting for the server. The server was coded incorrectly and didn't realize that it actually got two messages in one read. TCP does not preserve message boundaries.
If you tell me more about your protocol, I can probably tell you in more detail what went wrong. What constitutes a message? Which side sends when? Are there any acknowledgements? And so on.
But the short version is that each side is probably waiting for the other.
Most likely, the "connection reset by peer" is a symptom: your real problem occurs, one side times out and aborts the connection, and the other side then sees a connection reset because of that abort.
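
As an illustration of keeping message boundaries on top of TCP, here is a minimal Java sketch of length-prefixed framing; the 4-byte header is an assumption, just one common convention:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class Framing {
    // Each message is a 4-byte big-endian length followed by that many bytes.
    static void sendMessage(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length); // frame header
        out.write(payload);
        out.flush();
    }

    static byte[] readMessage(DataInputStream in) throws IOException {
        int len = in.readInt();       // read the frame header first
        byte[] payload = new byte[len];
        in.readFully(payload);        // loops internally until len bytes arrive
        return payload;               // exactly one message, never more or less
    }
}

With framing like this, two messages sent back-to-back can never be mistaken for one, because the receiver reads exactly as many bytes as each header announces.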