Server sends Welcome message more than once using select() - select

I'm currently having an issue with a POP3 server based on the select() function. The server handles multiple clients at once, but the Welcome message is sent as many times as there are connected clients.
Here is the code that prepares the descriptor sets and waits for activity.
// file descriptor set, array of client sockets
fd_set readset;
int sock_arr[30];
int max_fd, rc;
int activity;                 // result of select()
struct timeval timeout;       // presumably initialized in the elided setup code

servsock = socket(AF_INET, SOCK_STREAM, 0);
/*...*/
max_fd = servsock;
do
{
    FD_ZERO(&readset);
    FD_SET(servsock, &readset);
    for (int i = 0; i < 30; i++) {
        rc = sock_arr[i];
        if (rc > 0)
            FD_SET(rc, &readset);
        if (rc > max_fd)
            max_fd = rc;
    }
    activity = select(max_fd + 1, &readset, NULL, NULL, &timeout);
    if (activity < 0)
    {
        perror(" select() failed");
        break;
    }
    if (activity == 0)
    {
        printf(" select() timed out. End program.\n");
        break;
    }
The message is sent as many times as there are connected clients, e.g. when the first client connects the message is sent once, when the second client connects it is sent twice, and so on.
// here the server accepts new connections
if (FD_ISSET(servsock, &readset)) {
    serv_socket_len = sizeof(addr);
    peersoc = accept(servsock, (struct sockaddr *) &addr, &serv_socket_len);
    if (peersoc < 0) {
        error("Accept failed!\n", ERR_SCK);
    }
    else {
        char message[256];
        strcat(message, reply_code[1]);
        strcat(message, reply_code[3]);
        strcat(message, reply_code[0]);
        // Welcome message
        send(peersoc, message, strlen(message), 0);
        for (int i = 0; i < 30; i++) {
            if (sock_arr[i] == 0) {
                sock_arr[i] = peersoc;
                break;
            }
        }
    }
}
// server processes input messages from clients using threads
/*...*/
I have no idea what causes this; I assume it is something to do with the file descriptors. Please give me some advice if possible.

Solved: I had forgotten to clear the buffer used to build the message. Since message reuses the same stack storage on every pass through the accept branch and strcat() appends at whatever null terminator it finds, the previous greeting was still sitting in the buffer, so each new client received one extra copy.
...
char message[256];
memset(message, 0, sizeof(message));
...
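A way to avoid the problem entirely (just a sketch, assuming the reply_code[] entries are plain null-terminated strings) is to build the greeting with snprintf(), which never depends on the buffer's previous contents:

char message[256];
/* snprintf() always writes a fresh, null-terminated string, so no memset is needed */
int len = snprintf(message, sizeof(message), "%s%s%s",
                   reply_code[1], reply_code[3], reply_code[0]);
if (len > 0)
    send(peersoc, message, strlen(message), 0);   /* strlen() copes with truncation */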

Related

How to deal with SOCKET in select method?

I saw this example in the IBM docs on how to use select() in a server program, and I would like to do something similar on Windows without using vectors or unordered_map. The problem I am facing is that Windows uses SOCKET for socket descriptors while Linux uses int. Although I can cast a Windows SOCKET to an integer, that is not recommended, and the socket value is bigger than FD_SETSIZE, so the for loop ends before ever reaching the server socket descriptor and the function ends up being useless.
int Server::handler()
{
    int iResult;
    timeval timeout;
    fd_set activeFdSet;
    fd_set readFdSet;

    FD_ZERO(&activeFdSet);
    FD_SET(serv_sock, &activeFdSet);
    printf("FD_SETSIZE=%d\n", FD_SETSIZE);

    // listen for incoming connections
    iResult = listen(serv_sock, SOMAXCONN);

    timeout.tv_sec = 5;
    timeout.tv_usec = 0;

    while (1)
    {
        readFdSet = activeFdSet;
        printf("\tCopied activefdset to readfdset\n");

        int res = select(FD_SETSIZE, &readFdSet, NULL, NULL, &timeout);

        for (int i = 0; i < FD_SETSIZE; i++)
        {
            if (FD_ISSET(i, &readFdSet)) // check socket descriptor
            {
                if (i == (int)serv_sock)
                {
                    // accept connections to the server
                }
                else // client socket
                {
                    // receive from client
                }
            }
        }
    }
    return 0;
}
What is the best way to deal with the server socket in a for loop without using vectors or any other similar containers?
On non-Windows platforms, sockets are represented by file descriptors, which are basically small integer indexes into a file table. That is why you can use int sockets as loop counters.
However, that is not the case with Windows sockets. A SOCKET is an opaque handle to an actual kernel object, so you can't use SOCKETs as loop counters, like you are trying to do. And do not cast them to int.
You really have no choice but to store the accepted sockets in an array or other container and iterate through that instead, especially if you want the code to be portable across platforms. And if you need to handle more than FD_SETSIZE clients, you should be using (e)poll() or another asynchronous socket I/O mechanism instead of select(). For example:
int Server::handler()
{
    int iResult;
    timeval timeout;
    fd_set readFdSet;
    int maxFd;

    // listen for incoming connections
    iResult = listen(serv_sock, SOMAXCONN);

    timeout.tv_sec = 5;
    timeout.tv_usec = 0;

    while (1)
    {
        FD_ZERO(&readFdSet);
        FD_SET(serv_sock, &readFdSet);
#ifdef WIN32
        maxFd = -1; // not used on Windows
#else
        maxFd = serv_sock;
#endif
        for (each client_sock in list)
        {
            FD_SET(client_sock, &readFdSet);
#ifndef WIN32
            if (client_sock > maxFd) maxFd = client_sock;
#endif
        }

        int res = select(maxFd + 1, &readFdSet, NULL, NULL, &timeout);
        if (res < 0) ... // error handling as needed...

        if (FD_ISSET(serv_sock, &readFdSet))
        {
            // accept connections to the server, add to clients list
        }

        for (each client_sock in list)
        {
            if (FD_ISSET(client_sock, &readFdSet)) // check socket descriptor
            {
                // receive from client
            }
        }
    }
    return 0;
}
That being said, on Windows only, you can rely on Microsoft's documented implementation detail that fd_set has fd_count and fd_array[] members, so you can just iterate through the fd_set's internal array directly, eg:
int Server::handler()
{
    int iResult;
    timeval timeout;
    fd_set readFdSet;
    int maxFd;

    // listen for incoming connections
    iResult = listen(serv_sock, SOMAXCONN);

    timeout.tv_sec = 5;
    timeout.tv_usec = 0;

    while (1)
    {
        FD_ZERO(&readFdSet);
        FD_SET(serv_sock, &readFdSet);
#ifdef WIN32
        maxFd = -1; // not used on Windows
#else
        maxFd = serv_sock;
#endif
        for (each client_sock in list)
        {
            FD_SET(client_sock, &readFdSet);
#ifndef WIN32
            if (client_sock > maxFd) maxFd = client_sock;
#endif
        }

        int res = select(maxFd + 1, &readFdSet, NULL, NULL, &timeout);
        if (res < 0) ... // error handling as needed...

#ifdef WIN32
        for (int i = 0; i < readFdSet.fd_count; ++i)
#else
        for (int client_sock = 0; client_sock <= maxFd; ++client_sock)
#endif
        {
#ifdef WIN32
            SOCKET client_sock = readFdSet.fd_array[i];
#else
            if (!FD_ISSET(client_sock, &readFdSet)) // check socket descriptor
                continue;
#endif
            if (client_sock == serv_sock)
            {
                // accept connections to the server, add to clients list
            }
            else // client socket
            {
                // receive from client
            }
        }
    }
    return 0;
}
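One more note on the FD_SETSIZE concern: on Windows, FD_SETSIZE limits how many sockets an fd_set can hold (64 by default), not the numeric value of the handles, and Microsoft documents that you may redefine it before the first include of winsock2.h if you need room for more sockets, eg:

// must appear before the first #include of winsock2.h in this translation unit
#define FD_SETSIZE 1024
#include <winsock2.h>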

What buffer collects the data sent through TCP sockets on localhost?

I have a client and server connected through TCP sockets on localhost.
I check with getsockopt() that the server's SO_SNDBUF is small and the client's SO_RCVBUF is small (in my case both are 64KB).
I send twenty 500KB buffers from the server to the client, but in the client I've added a sleep for 500ms after each recv and I've capped the client receive buffer to 1MB.
What I observe is that the server very quickly rids itself of the 10MB of data which then arrives at the client in the next several seconds. 7-8MB are consistently in the "ether" in my experiments.
My question is: what is this "ether"? It's obviously some buffer somewhere, but can one tell which buffer it is?
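One way I've thought of to probe this (a Linux-specific sketch using the SIOCOUTQ/SIOCINQ ioctls; not wired into the test program below) is to ask the kernel how many bytes are queued in a socket's send and receive buffers at a given moment:

#include <sys/ioctl.h>
#include <linux/sockios.h>   /* SIOCOUTQ / SIOCINQ */

/* bytes of unsent data still in the socket's send queue (see tcp(7)), or -1 on error */
static int pending_out(int fd) {
    int n = 0;
    return (ioctl(fd, SIOCOUTQ, &n) == 0) ? n : -1;
}

/* bytes of unread data waiting in the socket's receive queue, or -1 on error */
static int pending_in(int fd) {
    int n = 0;
    return (ioctl(fd, SIOCINQ, &n) == 0) ? n : -1;
}

That would at least show whether the bytes are sitting in the sender's queue, the receiver's queue, or neither.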
Here is my test program.
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <thread>
#include <chrono>
#include <cstdio>
#include <cstdint>
#include <vector>
#include <cstdlib>

#define PROXY 0

static std::vector<uint8_t> getRandomBuf() {
    std::vector<uint8_t> buf;
    buf.reserve(500 * 1024);
    for (size_t i = 0; i < buf.capacity(); ++i) buf.push_back(rand() % 256);
    return buf;
}

int server() {
    auto sd = socket(AF_INET, SOCK_STREAM, 0);
    if (sd < 0) return puts("socket fail");

    sockaddr_in srv = {};
    srv.sin_family = AF_INET;
    srv.sin_addr.s_addr = INADDR_ANY;
    srv.sin_port = htons(7654);

    int enable = 1;
    if (setsockopt(sd, SOL_SOCKET, SO_REUSEADDR, &enable, sizeof(int)) < 0) {
        return puts("setsockopt fail");
    }
    if (bind(sd, (sockaddr*)&srv, sizeof(srv)) < 0) {
        return puts("bind fail");
    }
    listen(sd, 3);
    puts("listening...");

    sockaddr_in client;
    socklen_t csz = sizeof(client);
    auto sock = accept(sd, (sockaddr*)&client, &csz);
    if (sock < 0) return puts("accept fail");
    {
        int data;
        socklen_t size = sizeof(data);
        getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &data, &size);
        printf("accepted: %d\n", int(data));
    }

    for (int i = 0; i < 20; ++i) {
        auto buf = getRandomBuf();
        puts("Server sending blob");
        send(sock, buf.data(), buf.size(), 0);
        puts(" Server completed send of blob");
    }

    while (true) std::this_thread::yield();
    return close(sock);
}

int client() {
    int sd = socket(AF_INET, SOCK_STREAM, 0);
    if (sd < 0) return puts("socket fail");

    sockaddr_in client = {};
    client.sin_family = AF_INET;
    client.sin_addr.s_addr = inet_addr("127.0.0.1");
#if PROXY
    client.sin_port = htons(9654);
#else
    client.sin_port = htons(7654);
#endif
    if (connect(sd, (sockaddr*)&client, sizeof(client)) < 0) {
        return puts("connect fail");
    }
    {
        int data;
        socklen_t size = sizeof(data);
        getsockopt(sd, SOL_SOCKET, SO_RCVBUF, &data, &size);
        printf("connected: %d\n", int(data));
    }

    std::vector<uint8_t> buf(1024*1024);
    while (true) {
        auto s = recv(sd, buf.data(), buf.size(), 0);
        if (s <= 0) {
            puts("recv fail");
            break;
        }
        printf("Client received %.1f KB\n", double(s)/1024);
#if !PROXY
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
#endif
    }
    return close(sd);
}

int main() {
    std::thread srv(server);
    std::this_thread::sleep_for(std::chrono::milliseconds(300)); // give time for the server to start
    client();
    srv.join();
    return 0;
}
Note that in the test program there is a #define PROXY 0.
In another experiment, with PROXY set to 1, I ditch the sleep and instead connect the client through a throttling proxy (Charles) with the bandwidth capped at 400KB/s. In this case the server rids itself of the 10MB almost immediately and it arrives over the course of ~20 seconds at the client. I assume the proxy is buffering, though I don't see a setting for the buffer size in this particular one.
This is all part of hunting down another (likely bufferbloat) issue, in which the server sends 10MB in 20 sends from Denver to Amsterdam over an Internet connection which does indeed have 400KB/s of bandwidth. In this case the server, much like in the throttling-proxy example above, rids itself of the 10MB almost immediately, and it arrives over the next 20 seconds at the client, leading to 20-second delays for any subsequent messages. Had the data not left the server, I would have been able to reorder the packets and send higher-priority ones in between the ones from the 10MB blob, so the client would not suffer a 20-second delay due to network clog.

the unp book single-threaded server with select

In "UNIX Network Programming", 3rd edition, Vol. 1, Section 6.8 "TCP Echo Server (Revisited)" of Chapter 6 "I/O Multiplexing: The select and poll Functions", the book writes:
"Unfortunately, there is a problem with the server that we just showed. Consider what happens if a malicious client connects to the server, sends one byte of data(other than a newline), and then goes to sleep. The server will call read, which will read the single byte of data from the client and then block in the next call to read, waiting for more data from this client. The server is then blocked('hung' may be a better term)" by this one client and will not service any other clients (either new client connection or existing clients' data) until the malicious client either sends a newline or terminates."
However, I doubt that the situation is as the book describes. If the "malicious" client is asleep when select() is called for the second time, the corresponding socket descriptor will not be in the ready-for-reading state, so read() never gets the opportunity to block the single-threaded server. To verify this, I ran the sample server with a "malicious" client, only to find that the server is not blocked and responds to other clients normally.
I admit that when combined with I/O multiplexing calls such as select() or epoll(), it is recommended to use nonblocking I/O. But my question is: is there something wrong with the book's conclusion? Or are there conditions that may occur in real applications but not in this simple example? Or is there something wrong with my code? Thank you very much!
The sample server code (tcpservselect01.c):
#include "unp.h"
int
main(int argc, char **argv)
{
int i, maxi, maxfd, listenfd, connfd, sockfd;
int nready, client[FD_SETSIZE];
ssize_t n;
fd_set rset, allset;
char buf[MAXLINE];
socklen_t clilen;
struct sockaddr_in cliaddr, servaddr;
listenfd = Socket(AF_INET, SOCK_STREAM, 0);
bzero(&servaddr, sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
servaddr.sin_port = htons(SERV_PORT);
Bind(listenfd, (SA *) &servaddr, sizeof(servaddr));
Listen(listenfd, LISTENQ);
maxfd = listenfd; /* initialize */
maxi = -1; /* index into client[] array */
for (i = 0; i < FD_SETSIZE; i++)
client[i] = -1; /* -1 indicates available entry */
FD_ZERO(&allset);
FD_SET(listenfd, &allset);
for ( ; ; ) {
rset = allset; /* structure assignment */
nready = Select(maxfd+1, &rset, NULL, NULL, NULL);
if (FD_ISSET(listenfd, &rset)) {/* new client connection */
clilen = sizeof(cliaddr);
connfd = Accept(listenfd, (SA *) &cliaddr, &clilen);
for (i = 0; i < FD_SETSIZE; i++)
if (client[i] < 0) {
client[i] = connfd; /* save descriptor */
break;
}
if (i == FD_SETSIZE)
err_quit("too many clients");
FD_SET(connfd, &allset);/* add new descriptor to set */
if (connfd > maxfd)
maxfd = connfd; /* for select */
if (i > maxi)
maxi = i; /* max index in client[] array */
if (--nready <= 0)
continue; /* no more readable descriptors */
}
for (i = 0; i <= maxi; i++) {/* check all clients for data */
if ( (sockfd = client[i]) < 0)
continue;
if (FD_ISSET(sockfd, &rset)) {
if ( (n = Read(sockfd, buf, MAXLINE)) == 0) {
/*4connection closed by client */
Close(sockfd);
FD_CLR(sockfd, &allset);
client[i] = -1;
} else
Writen(sockfd, buf, n);
if (--nready <= 0)
break; /* no more readable descriptors */
}
}
}
}
the "malicious" client code
#include "unp.h"
void
sig_pipe(int signo)
{
printf("SIGPIPE received\n");
return;
}
int
main(int argc, char **argv)
{
int sockfd;
struct sockaddr_in servaddr;
if (argc != 2)
err_quit("usage: tcpcli <IPaddress>");
sockfd = Socket(AF_INET, SOCK_STREAM, 0);
bzero(&servaddr, sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_port = htons(9877);
Inet_pton(AF_INET, argv[1], &servaddr.sin_addr);
Signal(SIGPIPE, sig_pipe);
Connect(sockfd, (SA *) &servaddr, sizeof(servaddr));
Write(sockfd, "h", 1);
printf("go to sleep 20s\n");
sleep(20);
printf("wake up\n");
printf("go to sleep 20s\n");
Write(sockfd, "e", 1);
sleep(20);
printf("wake up\n");
exit(0);
}
I agree with you. The book's conclusion about the DoS is wrong. First of all, the book's sample server code doesn't assume that the input data consists of N bytes or ends with a newline, so one byte of input without a following newline does no harm to this server.
Google books link to the relevant page
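The blocking scenario the book warns about would only arise if the per-client read insisted on consuming a complete line before returning, for example with a hypothetical blocking helper like this (a sketch in plain POSIX, not code from the book):

/* Hypothetical line-oriented read: keeps calling read() until a '\n' arrives.
 * With a blocking socket, a client that sends one byte and then sleeps
 * parks the whole single-threaded server inside this loop. */
ssize_t read_line(int fd, char *buf, size_t maxlen)
{
    size_t used = 0;
    while (used < maxlen) {
        ssize_t n = read(fd, buf + used, 1);
        if (n <= 0)
            return n;            /* EOF or error */
        if (buf[used++] == '\n')
            break;               /* got a complete line */
    }
    return (ssize_t)used;
}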

using select() to detect connection close

As described in other posts, I'm trying to use select() in socket programming to detect closed connections. See the following code, which tries to detect closed connections with select() and a subsequent check on whether recv() returns 0. Before the while loop starts, there are already two established TCP connections. In our controlled experiment, the first connection always closes after about 15 seconds and the second after about 30 seconds.
Theoretically (as described by others), when they get closed, select() should return (twice in our case), which would let us detect both close events. The problem we face is that select() now only returns once and never again, so we detect ONLY the first connection close event. With a single connection the code works fine, but not with two or more connections.
Does anyone have any ideas or suggestions? Thanks.
while (1)
{
    printf("Waiting on select()...\n");
    if ((result = select(max + 1, &readset, NULL, NULL, NULL)) < 0)
    {
        printf("select() failed");
        break;
    }
    if (result > 0)
    {
        i = 0;
        while (i < max + 1)
        {
            if (FD_ISSET(i, &readset))
            {
                result = recv(i, buffer, sizeof(buffer), 0);
                if (result == 0)
                {
                    close(i);
                    FD_CLR(i, &readset);
                    if (i == max)
                    {
                        max -= 1;
                    }
                }
            }
            i++;
        }
    }
}
select() modifies readset to remove socket(s) that are not readable. Every time you call select(), you have to reset and fill readset with your latest list of active sockets that you want to test, eg:
fd_set readset;
int max;

while (1)
{
    FD_ZERO(&readset);
    max = -1;
    // populate readset from list of active sockets...
    // set max accordingly...

    printf("Waiting on select()...\n");
    result = select(max + 1, &readset, NULL, NULL, NULL);
    if (result < 0)
    {
        printf("select() failed");
        break;
    }
    if (result == 0)
        continue;

    for (int i = 0; i <= max; ++i)
    {
        if (FD_ISSET(i, &readset))
        {
            result = recv(i, buffer, sizeof(buffer), 0);
            if (result <= 0)
            {
                close(i);
                // remove i from list of active sockets...
            }
        }
    }
}
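An equivalent way to express the same thing (just a sketch, mirroring the master-set/working-set pattern from the UNP example earlier on this page) is to keep a persistent allset and copy it into readset right before each call, so the set never has to be rebuilt from scratch:

fd_set allset, readset;
int max = -1;

FD_ZERO(&allset);
// FD_SET(sock, &allset) and update max once per socket, when it is created

while (1)
{
    readset = allset;                 /* work on a copy; select() overwrites it */
    result = select(max + 1, &readset, NULL, NULL, NULL);
    if (result < 0)
        break;

    for (int i = 0; i <= max; ++i)
    {
        if (FD_ISSET(i, &readset) && recv(i, buffer, sizeof(buffer), 0) <= 0)
        {
            close(i);
            FD_CLR(i, &allset);       /* remove from the persistent set, not the copy */
        }
    }
}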

Select c: wfds is always turned on, causing block

For some reason FD_ISSET always returns true for &wfds, even when there is nothing to send. Here is the code snippet (the same on both client and server). Both the client and the server hit the same issue, with select() reporting that wfds is set. Shouldn't it only be set when I type a message on my keyboard and press Enter?
while (1) {
    // trying select..
    tv.tv_sec = 29;
    tv.tv_usec = 500000;

    FD_ZERO(&rfds);
    FD_ZERO(&wfds);
    FD_SET(new_sockfd, &rfds);
    FD_SET(new_sockfd, &wfds);

    n = select(new_sockfd + 1, &rfds, &wfds, NULL, &tv);
    if (n > 0) {
        if (FD_ISSET(new_sockfd, &rfds)) {
            while (1) {
                if ((num = recv(new_sockfd, buffer, 10240, 0)) == -1) {
                    //fprintf(stderr,"Error in receiving message!!\n");
                    perror("recv");
                    exit(1);
                } else if (num == 0) {
                    printf("Connection closed\n");
                    return 0;
                }
                buffer[num] = '\0';
                printf("Message received: %s\n", buffer);
                break;
            }
        }
        // this always returns true on client and host
        if (FD_ISSET(new_sockfd, &wfds)) {
            while (1) {
                fgets(buffer, MAXDATASIZE - 1, stdin);
                if ((send(new_sockfd, buffer, strlen(buffer), 0)) == -1) {
                    fprintf(stderr, "Failure Sending Message\n");
                    close(new_sockfd);
                    exit(1);
                } else {
                    printf("Message being sent: %s\n", buffer);
                    break;
                }
            }
        }
    }
}
You probably misunderstood how the writefds parameter of select() works.
You should set the flag in writefds for your file descriptor before calling select() if and only if you have something to send.
select() then returns with the flag still set in writefds once the socket has enough space in its buffers to accept data for sending. You check that flag, see that the socket is ready for sending, and you also know that you have something to send, since it was you who set the flag before calling select(), so you can proceed with sending the data over the socket. Once you have sent everything and your to-be-sent buffer is empty, you leave the flag in writefds cleared the next time you call select().
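A minimal sketch of that pattern (have_pending_output, out_buf and out_len are made-up names for whatever queue your own code fills when the user types a line; they are not from the code above):

FD_ZERO(&rfds);
FD_ZERO(&wfds);
FD_SET(new_sockfd, &rfds);
if (have_pending_output)              /* only ask about writability when data is queued */
    FD_SET(new_sockfd, &wfds);

n = select(new_sockfd + 1, &rfds, &wfds, NULL, &tv);
if (n > 0) {
    if (FD_ISSET(new_sockfd, &rfds)) {
        /* recv() as before */
    }
    if (have_pending_output && FD_ISSET(new_sockfd, &wfds)) {
        ssize_t sent = send(new_sockfd, out_buf, out_len, 0);
        if (sent > 0) {
            out_len -= (size_t)sent;
            memmove(out_buf, out_buf + sent, out_len);  /* keep any unsent tail */
            if (out_len == 0)
                have_pending_output = 0;                /* nothing left to send */
        }
    }
}

In a real interactive client you would also add STDIN_FILENO to rfds, so that select() tells you when the keyboard has input ready instead of blocking inside fgets().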