Select and read sockets (Unix) - sockets

I have an intermittent problem with a telnet based server on Unix (the problem crops up on both AIX and Linux).
The server opens two sockets, one to a client telnet session, and one to a program running on the same machine as the server. The idea is that the data is passed through the server to and from this program.
The current setup has a loop using select to wait for a "read" file descriptor to become available, then uses select to wait for a "write" file descriptor to become available.
Then the program reads from the incoming file descriptor, then processes the data before writing to the outgoing descriptor.
The snippet below shows what is going on. The problem is that, very occasionally, the read fails with errno set to ECONNRESET or ETIMEDOUT. Neither of these is an error code documented for read, so where are they coming from?
The real question is, how can I either stop this happening, or handle it gracefully?
Could doing two selects in a row be the problem?
The current handling behaviour is to shut down and restart. One point to note is that once this happens it normally happens three or four times, then clears up. The system load doesn't really seem to be that high (it's a big server).
if (select(8, &readset, NULL, NULL, NULL) < 0)
{
    break;
}
if (select(8, NULL, &writeset, NULL, NULL) < 0)
{
    break;
}
if (FD_ISSET(STDIN_FILENO, &readset)
    && FD_ISSET(fdout, &writeset))
{
    if ((nread = read(STDIN_FILENO, buff, BUFFSIZE)) < 0)
    {
        /* This sometimes fails with errno = ECONNRESET or ETIMEDOUT */
        break;
    }
}

Look at the comments in http://lxr.free-electrons.com/source/arch/mips/include/asm/errno.h on lines 85 and 98: these basically say there was a network connection reset or timeout. Check whether there are timeouts you can adjust on the remote network program, or send some periodic filler bytes to keep the connection alive. You may just be a victim of an error in the network transit path between the remote client and your local server (this happens to me when my DSL line hiccups).
EDIT: not sure what the downvote is for. The man page for read explicitly says:
Other errors may occur, depending on the object connected to fd.
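If sending application-level filler bytes isn't practical, one alternative worth trying (my suggestion, not something from the question) is TCP keepalive, so the stack itself probes idle connections. A minimal sketch, assuming an already-connected TCP socket fd; the TCP_KEEPIDLE family of options is Linux-style and may be spelled differently or be absent on other systems:
/* Sketch only: enable TCP keepalive on a connected socket so the kernel
   probes idle connections and reports dead peers. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int enable_keepalive(int fd)
{
    int on = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
#ifdef TCP_KEEPIDLE
    int idle = 60;          /* start probing after 60s of idleness       */
    int interval = 10;      /* then probe every 10s                      */
    int count = 5;          /* declare the peer dead after 5 lost probes */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
#endif
    return 0;
}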
The error is probably occurring in the select, not in the read: you're not checking errors after the select, you're just proceeding to the read, which will fail if the select returned an error. I'm betting that if you check the errno value after the select call you'll see the errors: you don't need to wait for the read to see them.
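Putting both suggestions together, here is a rough sketch of how the loop from the question might distinguish a retryable select error from a fatal one, and treat ECONNRESET/ETIMEDOUT on the read as an ordinary disconnect; cleanup_connection() is a hypothetical placeholder for whatever teardown the server already performs:
/* Sketch only, same fragment style as the snippet above: readset, buff,
   BUFFSIZE and nread come from the question; cleanup_connection() is a
   hypothetical helper.  Needs <errno.h>, <string.h> and <stdio.h>. */
if (select(8, &readset, NULL, NULL, NULL) < 0)
{
    if (errno == EINTR)
        continue;                       /* interrupted by a signal: retry */
    fprintf(stderr, "select: %s\n", strerror(errno));
    break;                              /* genuine error: leave the loop  */
}
...
if ((nread = read(STDIN_FILENO, buff, BUFFSIZE)) < 0)
{
    if (errno == ECONNRESET || errno == ETIMEDOUT)
    {
        /* The peer reset the connection or it timed out: clean up this
           connection instead of restarting the whole server. */
        cleanup_connection();
        break;
    }
    fprintf(stderr, "read: %s\n", strerror(errno));
    break;
}
else if (nread == 0)
{
    cleanup_connection();               /* orderly close by the peer */
    break;
}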

Related

Boost asio connection trial takes too long

I have the following code:
...
namespace ba = boost::asio;
...
ba::ip::tcp::resolver resolver(ioService);
ba::ip::tcp::socket sock(ioService);
ba::ip::tcp::resolver::query query(the_ip, the_port);
ba::ip::tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
ba::ip::tcp::resolver::iterator end;
boost::system::error_code error = ba::error::host_not_found;
while (error && (endpoint_iterator != end))
{
    sock.close();
    sock.connect(*endpoint_iterator++, error);
}
When the server is up, it connects instantly and the rest of the code flows wonderfully, but when the server is offline or unreachable this while loop takes forever to exit.
Before the while: 09:04:37
When it exits: 09:05:40
Since I give query ONE IP and ONE PORT, shouldn't it exit instantly, given no connection can be established?
Are there any parameters I didn't configure? Something like "try X times" or "try for Y seconds"?
EDIT:
As seen in the comments, the time seems to be spent searching out over the internet, yet the connection is local (192.168...)!
Why does it go out to the internet for a local IP?
As @MartinJames said in the comments:
"The internet is quite big, and has all sorts of links, some with very high latency.
Determining unreachability therefore takes a long time.
That is what happens when you connect all the computers on a planet together."
I tried disconnecting from the internet and the while exited instantly.
But the IP I'm using is local (192.168...), so this is still strange.
Thanks, man.
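For reference, the synchronous connect has no "try for Y seconds" knob, but you can impose one yourself by pairing async_connect with a deadline_timer. The following is only a rough, untested sketch against the io_service-era Boost.Asio API used above; connect_with_timeout is a made-up helper name:
#include <boost/asio.hpp>
#include <boost/bind.hpp>

namespace ba = boost::asio;

static void on_connect(const boost::system::error_code& ec,
                       ba::deadline_timer& timer,
                       boost::system::error_code& result)
{
    result = ec;
    timer.cancel();                  // connect finished: stop the timeout
}

static void on_timeout(const boost::system::error_code& ec,
                       ba::ip::tcp::socket& sock)
{
    if (ec != ba::error::operation_aborted)
        sock.close();                // aborts the pending async_connect
}

boost::system::error_code connect_with_timeout(ba::io_service& io,
                                                ba::ip::tcp::socket& sock,
                                                const ba::ip::tcp::endpoint& ep,
                                                int seconds)
{
    boost::system::error_code result = ba::error::would_block;

    ba::deadline_timer timer(io);
    timer.expires_from_now(boost::posix_time::seconds(seconds));
    timer.async_wait(boost::bind(on_timeout, _1, boost::ref(sock)));

    sock.async_connect(ep, boost::bind(on_connect, _1,
                                       boost::ref(timer),
                                       boost::ref(result)));

    io.run();                        // runs until both handlers have fired
    io.reset();                      // allow the io_service to be reused
    return result;                   // success, operation_aborted, etc.
}
Called as connect_with_timeout(ioService, sock, *endpoint_iterator, 5), it should return within roughly five seconds even when the host is unreachable.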

Using `chan pending output` instead of writable fileevent

Yo, I've written a server with a simple protocol: the client sends a line, the server sends a line back in response, repeat. To prevent a client from filling Tcl's output buffer by sending lots of lines but not accepting data back, can I just check chan pending output instead of using the writable fileevent?
proc respond {stream msg} {
    if {[chan pending output $stream] <= 1024} {
        puts $stream $msg
    } else {
        #close $stream
    }
}
For output, chan pending output will correctly describe the number of bytes waiting in the output queue. Normally, that value will be bounded by the -buffersize value that you chan configure (or fconfigure) it to have.
That value will only be exceeded when the channel is non-blocking; with a blocking channel, when the value would go over it, instead there's a blocking write to the underlying device (socket, pipe, file, serial line, whatever) so by the time you could see that it went over, it's back under the limit again.
But if you're using non-blocking channels, you really should use chan event (or fileevent). Luckily, for the actual writes Tcl will do this for you automatically; the single most useful thing you could want from a writable event is already there. In practice, the most common real use of a writable event is detecting when an async socket connection becomes ready for service.
So what you are doing will work, but you'll have to think carefully about what to do if the output buffer is “getting full”; the idea that a message can need to be delayed is a place where a simple abstraction tends to become leaky. With 8.6's coroutines, you could (probably) do a transparent suspend or something like that, but getting that sort of thing right can take a little thought. (For example, a GUI client might need to show a busy indicator and put things into a state where the user can't enter more requests.)

Weird Winsock recv() slowdown

I'm writing a little VOIP app like Skype, which works quite well right now, but I've run into a very strange problem.
In one thread, within a while(true) loop, I call the winsock recv() function twice per run to get data from a socket.
The first call gets 2 bytes which are cast into a (short), while the second call gets the rest of the message, which looks like:
Complete Message: [2 Byte Header | Message, length determined by the 2Byte Header]
These packets come in at roughly 49/sec, which works out to roughly 3000 bytes/sec.
The content of these packets is audio-data that gets converted into wave.
With ioctlsocket() I determine whether there is data left on the socket after each "message" I receive (2 bytes + data). If there is something on the socket right after I received a message within the thread's while(true) loop, that message is received but thrown away, to work against latency stacking up.
This concept works very well, but here's the problem:
While my VOIP program is running and I am downloading a file in parallel (e.g. via a browser), too much data always piles up on the socket, because during the download the recv() loop actually seems to slow down. This happens in every download/upload situation besides the actual VOIP up/download.
I don't know where this behaviour comes from, but when I actually cancel every up/download besides the voip traffic of my application, my apps works again perfectly.
If the program runs perfectly, the ioctlsocket() function writes 0 into the bytesLeft var, defined within the class where the receive function comes from.
Does somebody know where this comes from? I'll attach my receive function down below:
std::string D_SOCKETS::receive_message(){
    recv(ClientSocket, (char*)&val, sizeof(val), MSG_WAITALL);
    receivedBytes = recv(ClientSocket, buffer, val, MSG_WAITALL);
    if (receivedBytes != val){
        printf("SHORT: %d PAKET: %d ERROR: %d", val, receivedBytes, WSAGetLastError());
        exit(128);
    }
    ioctlsocket(ClientSocket, FIONREAD, &bytesLeft);
    cout << "Bytes left on the Socket:" << bytesLeft << endl;
    if (bytesLeft > 20)
    {
        // message was received, but is ignored/thrown away to fight latency
        return std::string();
    }
    else
        return std::string(buffer, receivedBytes);
}
There is no need to use ioctlsocket() to discard data. That would indicate a bug in your protocol design. Assuming you are using TCP (you did not say), there should not be any left over data if your 2byte header is always accurate. After reading the 2byte header and then reading the specified number of bytes, the next bytes you receive after that constitute your next message and should not be discarded simply because it exists.
The fact that ioctlsocket() reports more bytes available means that you are receiving messages faster than you are reading them from the socket. Make your reading code run faster, don't throw away good data due to your slowness.
Your reading model is not efficient. Instead of reading 2 bytes, then X bytes, then 2 bytes, and so on, you should instead use a larger buffer to read more raw data from the socket at one time (use ioctlsocket() to know how many bytes are available, then read at least that many bytes at one time and append them to the end of your buffer), and then parse as many complete messages as are in the buffer before reading more raw data from the socket again. The more data you can read at a time, the faster you can receive data.
To help speed up the code even more, don't process the messages inside the loop directly, either. Do the processing in another thread instead. Have the reading loop put complete messages in a queue and go back to reading, and then have a processing thread pull from the queue whenever messages are available for processing.
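Here is a rough sketch of that buffered approach, under a few assumptions of my own: read_messages and the 4096-byte chunk size are placeholder choices, the socket is a blocking TCP socket, and the 2-byte header is in the sender's native byte order. Complete messages pulled out of the buffer would then be pushed onto the queue that the processing thread drains, as described above.
#include <winsock2.h>
#include <cstring>
#include <string>
#include <vector>

// Appends whatever is on the socket to 'pending', then extracts every
// complete [2-byte length | payload] message into 'out'.
bool read_messages(SOCKET s, std::vector<char>& pending,
                   std::vector<std::string>& out)
{
    char chunk[4096];
    int n = recv(s, chunk, sizeof(chunk), 0);   // grab as much as is available
    if (n <= 0)
        return false;                           // error or connection closed
    pending.insert(pending.end(), chunk, chunk + n);

    for (;;) {
        if (pending.size() < 2)
            break;                              // header not complete yet
        unsigned short len;
        std::memcpy(&len, &pending[0], 2);      // 2-byte length header
        if (pending.size() < 2u + len)
            break;                              // payload not complete yet
        out.push_back(std::string(&pending[0] + 2, len));
        pending.erase(pending.begin(), pending.begin() + 2 + len);
    }
    return true;
}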

how to stop read() function?

I am writing UDP client/server programs in C on Linux. The thing is that I must add a connection check feature. In the Linux man pages I found an example of a UDP client with this feature ( http://www.kernel.org/doc/man-pages/online/pages/man3/getaddrinfo.3.html ). It uses write/read functions to check the server response. I tried to apply it to my program, and it looks like this:
char *test = "test";
nlen = strlen(test) + 1;
if (write(sock, test, nlen) != nlen) {    // socket with the connection
    fprintf(stderr, "partial/failed write\n");
    exit(EXIT_FAILURE);
}
nread = read(sock, buf, nlen);
if (nread == -1) {
    perror("Connection");
    exit(EXIT_FAILURE);
}
The problem is that when I execute the program with this bit while the server is on, it does nothing except block reading input forever (input it doesn't even use). I tried a kill(getpid(), SIGHUP) interrupt, which I found in another similar topic here, and shutdown(sock, 1), but nothing seems to work. Could you please tell me a way to stop the blocking read, or any other possible option to check whether the UDP server is active?
You should use asynchronous reading.
For this you must use the select() function from "sys/select.h".
The function monitors the socket descriptor for changes without blocking.
You can find the reference for it here or here or by typing "man select" in your console.
There is an example here for Windows, but its usage is very similar to UNIX.
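A minimal sketch of that idea, using select() with a timeout to guard the read() from the snippet above; the 5-second limit is an arbitrary example value:
/* Needs <sys/select.h> in addition to the headers the snippet already uses. */
fd_set readfds;
struct timeval tv;

FD_ZERO(&readfds);
FD_SET(sock, &readfds);
tv.tv_sec = 5;                /* give the server 5 seconds to answer */
tv.tv_usec = 0;

int ready = select(sock + 1, &readfds, NULL, NULL, &tv);
if (ready == -1) {
    perror("select");
    exit(EXIT_FAILURE);
} else if (ready == 0) {
    fprintf(stderr, "no response: server appears to be down\n");
    exit(EXIT_FAILURE);
}
/* Data is waiting, so this read() will not block. */
nread = read(sock, buf, nlen);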

epoll_wait() receives socket closed twice (read()/recv() returns 0)

We have an application that uses epoll to listen for and process HTTP connections. Sometimes epoll_wait() receives a close event on an fd twice in a "row". Meaning: epoll_wait() returns a connection fd on which read()/recv() returns 0. This is a problem, since I have a malloc'd pointer saved in the epoll_event struct (struct epoll_event.data.ptr) which is freed when the fd (socket) is detected as closed the first time. The second time it crashes.
This problem occurs very rarely in real use (except on one site, which actually has around 500-1000 users per server). I can replicate the problem using http siege with >1000 simultaneous connections per second. In this case the application segfaults (because of the invalid pointer) very randomly, sometimes after a few seconds, usually after tens of minutes. I have been able to replicate the problem with fewer connections per second, but for that I have to run the application a long time, many days, even weeks.
All new accept() connection fds are set as non-blocking and added to epoll as one-shot, edge-triggered and waiting for read() to become available. So why, when the server load is high, does epoll think that my application didn't get the close event and queue a new one?
epoll_wait() is running in its own thread and queues fd events to be handled elsewhere. I noticed that there were multiple closes incoming, with simple code that checks whether an event comes twice in a row from epoll for the same fd. It did happen, and the events were both closes (recv(.., MSG_PEEK) told me this :)).
epoll fd is created: epoll_create(1024);
epoll_wait() is run as follows: epoll_wait(epoll_fd, events, 256, 300);
new fd is set as non-blocking after accept():
int flags = fcntl(fd, F_GETFL, 0);
err = fcntl(fd, F_SETFL, flags | O_NONBLOCK);
new fd is added to epoll (client is malloc:ed struct pointer):
static struct epoll_event ev;
ev.events = EPOLLIN | EPOLLONESHOT | EPOLLET;
ev.data.ptr = client;
err = epoll_ctl(epoll_fd, EPOLL_CTL_ADD, client->fd, &ev);
And after receiving and handling data from an fd, it is re-armed (of course, since EPOLLONESHOT). At first I wasn't using edge-triggering and non-blocking IO, but I tested them and got a nice performance boost. This problem existed before adding them, though. Btw. shutdown(fd, SHUT_RDWR) is used on other threads to trigger a proper close event to be received through epoll when the server needs to close the fd because of some HTTP error etc. (I don't actually know if this is the right way to do it, but it has worked perfectly).
As soon as the first read() returns 0, this means that the connection was closed by the peer. Why does the kernel generate a EPOLLIN event for this case? Well, there's no other way to indicate the socket's closure when you're only subscribed to EPOLLIN. You can add EPOLLRDHUP which is basically the same as checking for read() returning 0. However, make sure to test for this flag before you test for EPOLLIN.
if (flag & EPOLLRDHUP) {
    /* Connection was closed. */
    deleteConnectionData(...);
    close(fd); /* Will unregister yourself from epoll. */
    return;
}
if (flag & EPOLLIN) {
    readData(...);
}
if (flag & EPOLLOUT) {
    writeData(...);
}
The way I've ordered these blocks is relevant and the return for EPOLLRDHUP is important too, because it is likely that deleteConnectionData() may have destroyed internal structures. As EPOLLIN is set as well in case of a closure, this could lead to some problems. Ignoring EPOLLIN is safe because it won't yield any data anyway. Same for EPOLLOUT as it's never sent in conjunction with EPOLLRDHUP!
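For completeness, the registration from the question would then include EPOLLRDHUP alongside the existing flags; a short sketch using the same variables as in the question:
/* Same registration as in the question, with EPOLLRDHUP added so a peer
   close is reported separately from plain readability. */
struct epoll_event ev;
ev.events = EPOLLIN | EPOLLRDHUP | EPOLLET | EPOLLONESHOT;
ev.data.ptr = client;                     /* malloc'd per-connection struct */
if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, client->fd, &ev) < 0)
    perror("epoll_ctl");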
epoll_wait() is running in its own thread and queues fd events to be handled elsewhere.
... So why, when the server load is high, does epoll think that my application didn't get the close event and queue a new one?
Assuming that EPOLLONESHOT is bug free (I haven't searched for associated bugs though), the fact that you are processing your epoll events in another thread and that it crashes sporadically or under heavy load may mean that there is a race condition somewhere in your application.
Maybe the object pointed to by epoll_event.data.ptr gets deallocated prematurely, before the epoll event is unregistered in another thread, when your server does an active close of the client connection.
My first try would be to run it under valgrind and see if it reports any errors.
I would re-check myself against the following sections from epoll(7):
Q6: Will closing a file descriptor cause it to be removed from all epoll sets automatically?
and
o If using an event cache...
There're some good points there.
Removing EPOLLONESHOT made the problem disappear, after a few other changes. Unfortunately I'm not totally sure what caused it. Using EPOLLONESHOT with threads and adding the fd back into the epoll queue manually was quite certainly the problem. Also, the data pointer in the epoll struct is now released after a delay. It works perfectly now.
Register the event flag 0x2000 (EPOLLRDHUP) for "remote host closed the connection",
e.g. ev.events = EPOLLIN | EPOLLONESHOT | EPOLLET | EPOLLRDHUP (use the literal 0x2000 if your headers don't define EPOLLRDHUP),
and check if (flag & EPOLLRDHUP) to detect the remote host closing the connection.