I've got an event-driven network server program. This program accepts connections from other processes on other hosts. There may be many short-lived connections from different ports on the same remote IP.
Currently, I've got a while(1) loop which calls accept() and then spawns a thread to process the new connection. Each connection is closed after the message is read. On the remote end, the connection is closed after a message is sent.
I want to eliminate the overhead of setting up and tearing down connections by caching the open socket FDs. On the sender side, this is easy - I just don't close the connections, and keep them around.
On the receiver side, it's a bit harder. I know I can store the FD returned by accept() in a structure and listen for messages across all such sockets using poll() or select(), but I want to simultaneously both listen for new connections via accept() and listen on all the cached connections.
If I use two threads, one on poll() and one on accept(), then when the accept() call returns (a new connection is opened), I have to wake up the other thread waiting on the old set of connections. I know I can do this with a signal and pselect(), but this whole mess seems like way too much work for something so simple.
Is there a call or superior methodology that will let me simultaneously handle new connections being opened and data being sent on old connections?
Last time I checked, you could just listen on a socket and then select() or poll() to see if a connection came in. If so, accept it; it will not block (but you really should set O_NONBLOCK just to be sure).
You can call listen(), then use select() or poll() to wait for the listening socket to become readable, then call accept():
if (listen(socket_fd, Number_connection) < 0)
{
    perror("listen");
    return 1;
}

fd_set set;
struct timeval timeout;
int rv;

FD_ZERO(&set);                /* clear the set */
FD_SET(socket_fd, &set);      /* add our listening descriptor to the set */
timeout.tv_sec = 20;
timeout.tv_usec = 0;

rv = select(socket_fd + 1, &set, NULL, NULL, &timeout);
if (rv == -1)
{
    perror("select");         /* an error occurred */
    return 1;
}
else if (rv == 0)
{
    printf("timeout occurred (20 seconds)\n");  /* a timeout occurred */
    return 1;
}
else
{
    /* the listening socket is readable, so accept() will not block */
    client_socket_fd = accept(socket_fd, (struct sockaddr *)&client_name, &client_name_len);
}
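The snippet above only watches the listening socket. To get what the original question asks for, watching new connections and the cached connections at the same time, put the listening descriptor in the same set you poll. Below is a minimal sketch using poll(); the function name serve(), the buffer, and MAX_CONNS are my own illustrative names, not from the question.

#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_CONNS 1024

/* listen_fd must already be bound and passed to listen() */
static void serve(int listen_fd)
{
    struct pollfd fds[MAX_CONNS];
    nfds_t nfds = 1;

    fds[0].fd = listen_fd;              /* slot 0 is the listening socket */
    fds[0].events = POLLIN;

    for (;;) {
        if (poll(fds, nfds, -1) < 0) {
            perror("poll");
            break;
        }

        if (fds[0].revents & POLLIN) {  /* a new connection is waiting */
            int client = accept(listen_fd, NULL, NULL);
            if (client >= 0 && nfds < MAX_CONNS) {
                fds[nfds].fd = client;  /* cache the new descriptor */
                fds[nfds].events = POLLIN;
                nfds++;
            } else if (client >= 0) {
                close(client);          /* table full, drop it */
            }
        }

        for (nfds_t i = 1; i < nfds; i++) {   /* data or EOF on cached sockets */
            if (fds[i].revents & (POLLIN | POLLHUP)) {
                char buf[4096];
                ssize_t n = read(fds[i].fd, buf, sizeof buf);
                if (n <= 0) {                 /* peer closed or error: drop it */
                    close(fds[i].fd);
                    fds[i] = fds[nfds - 1];   /* compact the array */
                    nfds--;
                    i--;
                } else {
                    /* process the n bytes in buf here */
                }
            }
        }
    }
}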
I'd put the listener in a separate process (or thread) so it doesn't interfere with the rest, and run a worker process (or thread) to handle the existing sockets. There's really no need for a non-blocking listener, and running two threads adds little overhead.
It should work like this: you block in accept() on the listener thread until it returns a client socket descriptor, then pass that descriptor to the worker, which does all the dirty read/write work on it.
If you want to listen on several ports and don't want to hold one process per listener, I suggest you set your sockets to O_NONBLOCK and do something like:
// loop through listeners here and poll them for read;
// when the poll is successful call accept, get the descriptor,
// pass it to a worker, and continue listening
while (1) {
    foreach (serverSocket in ServerSockets) {
        if (serverSocket.Poll(10, SelectRead)) {
            clientSocket = serverSocket.Accept();
            // pass to worker here and release
        }
    }
}
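For reference, a rough C sketch of the same idea, assuming the listening descriptors are already bound and listening; listen_fds, nlisten, and the worker hand-off are placeholder names, not from the answer above.

#include <fcntl.h>
#include <poll.h>
#include <sys/socket.h>

/* poll several O_NONBLOCK listening sockets in one loop */
static void listen_loop(int *listen_fds, int nlisten)
{
    struct pollfd pfds[nlisten];

    for (int i = 0; i < nlisten; i++) {
        /* make accept() non-blocking on every listener */
        fcntl(listen_fds[i], F_SETFL, fcntl(listen_fds[i], F_GETFL, 0) | O_NONBLOCK);
        pfds[i].fd = listen_fds[i];
        pfds[i].events = POLLIN;
    }

    for (;;) {
        if (poll(pfds, nlisten, -1) < 0)
            break;

        for (int i = 0; i < nlisten; i++) {
            if (!(pfds[i].revents & POLLIN))
                continue;
            for (;;) {                     /* drain every pending connection */
                int client = accept(pfds[i].fd, NULL, NULL);
                if (client < 0)
                    break;                 /* EAGAIN/EWOULDBLOCK: nothing more to accept */
                /* hand 'client' over to a worker thread/process here */
            }
        }
    }
}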
I am facing a problem binding to a socket.
The 1st instance works properly, i.e. socket() returns success, and then bind(), listen(), accept(), and recv() all work fine.
The 2nd instance throws an error while binding: "Address already in use".
I went through all the earlier posts on this and I don't see any specific solution for it.
My code is as below:
if ((status = getaddrinfo(NULL, "8080", &hints, &servinfo)) != 0) {
    /* note: getaddrinfo() reports errors via gai_strerror(), not errno */
    ALOGE("Socket:: getaddrinfo failed %s\n", gai_strerror(status));
    return NULL;
}

server_sockfd = socket(servinfo->ai_family, servinfo->ai_socktype, servinfo->ai_protocol);
if (server_sockfd == -1) {
    ALOGE("Socket:: socket system call failed %s\n", strerror(errno));
    return NULL;
}

if (setsockopt(server_sockfd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(int)) < 0) {
    ALOGE("Socket:: setsockopt failed %s\n", strerror(errno));
    return NULL;
}

ret = bind(server_sockfd, servinfo->ai_addr, servinfo->ai_addrlen);
if (ret != 0) {
    ALOGE("Socket:: Error binding on socket %s\n", strerror(errno));
    return NULL;
}
This code runs on the Android platform.
I have properly closed each session before opening a new one, as below:
ret = shutdown(client_sockfd, 0);
if (ret != 0)
    ALOGE("Socket:: shutdown failed %s\n", strerror(errno));
I tried close() as well, but it did not work.
Surprisingly, the error does not disappear even when we try to open the socket after a long time (well beyond the TIME_WAIT period).
Could anyone please guide me to the proper call, API, or logic (in code, not on the command line, and short of directly killing the process) to handle this situation?
A socket is one half of a communication channel between two computers over a network on a particular port (the other half is the corresponding socket on the other computer).
The error is quite clear in this case: "Address already in use" means the address the socket tries to bind to on the second attempt is already occupied, most likely by the first socket.
To investigate further, check the related SO questions here and here.
You can't share a TCP listening port between two processes even with SO_REUSEADDR.
NB: shutdown() does not close a TCP session; it half-closes it. You have to close() the socket.
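For illustration, a sketch of the tear-down this implies; the drain loop and the SHUT_WR choice are assumptions on my part, not from the question, and on the server side the listening socket itself must also be closed before another process can bind the same port.

/* needs <sys/socket.h> and <unistd.h> */

/* half-close our sending direction so the peer sees EOF... */
shutdown(client_sockfd, SHUT_WR);

/* ...optionally drain whatever the peer still sends... */
char buf[256];
while (recv(client_sockfd, buf, sizeof buf, 0) > 0)
    ;

/* ...and only close() actually releases the descriptor */
close(client_sockfd);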
I have an implementation where I listen on a port for events and do processing based on the input. I have kept it in an infinite loop. However, it only works once and then I have to restart the program. Does control never come back? Is this infinite loop a good idea?
Integer port = Integer.parseInt(Configuration.getProperty("Environment", "PORT"));
ServerSocket serverSocket = new ServerSocket(port);
LOG.info("Process Server listening on PORT: " + port);
while (true) {
    Socket socket = serverSocket.accept();
    new Thread(new ProcessEvent(socket)).start();
}
Once you have started the thread that handles the client, you also need to loop on a read function, because after you read one message you will need to read the next ones. accept() returns only once per client connection; after the connection is opened, everything happens in the thread until the connection is closed.
Looping on accept() is a good idea, but the spawned thread must not exit as long as your client is connected. If you intentionally close the connection, that is fine as long as it is handled correctly on both sides, and the client then needs to reopen the connection for further communication.
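The same principle sketched in C rather than Java (the function name and buffer size are illustrative): the per-connection handler keeps reading until the peer closes, while the accept() loop keeps running independently.

#include <sys/socket.h>
#include <unistd.h>

/* runs in the per-connection thread; returns only when the client disconnects */
static void handle_client(int client_fd)
{
    char buf[4096];
    ssize_t n;

    while ((n = recv(client_fd, buf, sizeof buf, 0)) > 0) {
        /* process the n bytes of this message, send a reply if needed */
    }
    /* n == 0 means the peer closed; n < 0 means an error; either way we are done */
    close(client_fd);
}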
I wrote server code to run on my embedded platform. It listens for Wi-Fi clients, and I have made provision to accept only one client connection at a time.
So I do:
sfd = socket(AF_INET, SOCK_STREAM, 0);
ret = bind(sfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
ret = listen(sfd, 5);
while (1)
{
    new_fd = accept(sfd, (struct sockaddr *)&client_addr, &len);
    ....
    close(new_fd);
}
In this case I observe that only one client can send data, which is expected.
But another client can connect to the socket simultaneously, although the data from the 2nd client is not processed.
Is this because of the listen() backlog parameter of 5, so that up to 5 clients can connect simultaneously even though I may not process them?
Please help me clarify.
I am implementing a server in which I listen for the client to connect using the accept() socket call.
After the accept happens and I receive the socket, I wait around 10-15 seconds before making the first recv/send call.
The send calls to the client then fail with errno = 32, i.e. broken pipe.
Since I don't control the client, I have set the socket option SO_KEEPALIVE on the accepted socket.
const int keepAlive = 1;
acceptsock = accept(sock, (struct sockaddr *)&client_addr, &client_addr_length);
if (setsockopt(acceptsock, SOL_SOCKET, SO_KEEPALIVE, &keepAlive, sizeof(keepAlive)) < 0)
{
    print(" SO_KEEPALIVE fails");
}
Could anyone please tell me what may be going wrong here, and how we can prevent the client socket from closing?
NOTE
One thing I want to add is that if there is no time gap, or a gap of less than 5 seconds, between the accept and the send/recv calls, the client-server communication works as expected.
connect(2) and send(2) are two separate system calls the client makes. The first initiates the TCP three-way handshake; the second actually queues application data for transmission.
On the server side though, you can start send(2)-ing data to the connected socket immediately after successful accept(2) (i.e. don't forget to check acceptsock against -1).
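For instance, a small sketch reusing the variable names from the question (the greeting payload is purely illustrative):

acceptsock = accept(sock, (struct sockaddr *)&client_addr, &client_addr_length);
if (acceptsock == -1) {
    perror("accept");                 /* nothing to talk to, bail out */
    return;
}

/* the connection is fully established here, so sending right away is fine */
const char greeting[] = "hello\n";
send(acceptsock, greeting, sizeof greeting - 1, 0);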
After the accept happens and I receive the socket, i wait for around 10-15 seconds before making the first recv/send call.
Why? Do you mean that the client takes that long to send the data? Or that you just futz around in the server for 10-15 s between accept() and recv(), and if so, why?
The send calls to the client fails with errno = 32 i.e broken pipe.
So the client has closed the connection.
Since I don't control the client, I have set the socket option SO_KEEPALIVE on the accepted socket.
That won't stop the client closing the connection.
Could anyone please tell what may be going wrong here
The client is closing the connection.
and how can we prevent the client socket from closing ?
You can't.
I am working with client-server programming. I am referring to this link, and my server is running successfully.
I need to send data continuously to the server.
I don't want to connect() every time before sending each packet. So the first time I just create a socket and send the first packet; for the rest of the data I just use write() to write to the socket.
But my problem is that while sending data continuously, if the server goes away or my Ethernet is disabled, write() still successfully writes data to the socket.
Is there any method by which I can create the socket only once and keep sending data continuously while still detecting server failure?
The main reason for doing it this way is that on the server side I am using a GPRS modem, and each time connect() is called for each packet the modem hangs.
For creating the socket I use the code below:
Gprs_sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (Gprs_sockfd < 0)
{
    Display("ERROR opening socket");
    return 0;
}

server = gethostbyname((const char *)ip_address);
if (server == NULL)
{
    Display("ERROR, no such host");
    return 0;
}

bzero((char *)&serv_addr, sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length);
serv_addr.sin_port = htons(portno);

if (connect(Gprs_sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0)
{
    Display("ERROR connecting");
    return 0;
}
And each time I write to the socket using the code below:
n = write(Gprs_sockfd, data, length);
if (n < 0)
{
    Display("ERROR writing to socket");
    return 0;
}
Thanks in advance.
TCP was designed to tolerate temporary failures. It does byte sequencing, acknowledgments and, if necessary, retransmissions. All unacknowledged data is buffered inside the kernel's network stack. If I remember correctly, the default is three retransmission attempts (somebody correct me if I'm wrong) with exponential back-off timeouts. That quickly adds up to dozens of seconds, if not minutes.
My suggestion would be to design application-level acknowledgments into your protocol, meaning the server would send a short reply saying how much data it has received so far, say every second. If the client does not receive such an ack within, say, 3 seconds, it knows the connection is unusable and can close it. By the way, this is easier done with non-blocking sockets and polling functions like select(2) or poll(2).
Edit 0:
I think this would be very relevant here - "The ultimate SO_LINGER page, or: why is my tcp not reliable".
Nikolai is correct here; the behaviour you are seeing is desirable, because it lets you continue transferring data after a network outage without any extra logic in your application. If your application needs to detect outages longer than a specified amount of time, you have to add heartbeating to your protocol. That is the standard way of solving the problem. It also lets you detect the situation where the network is fine and the receiver is alive, but it has deadlocked (due to a software bug).
Heartbeating can be as simple as Nikolai mentioned: sending a small packet every X seconds; if the server doesn't see a packet for N*X seconds, the connection is dropped.
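A minimal sketch of such a heartbeat check on the client side, assuming the protocol reserves a one-byte ping that the server answers with a one-byte ack; the byte values, the timeout, and the function name are all illustrative assumptions.

#include <poll.h>
#include <sys/socket.h>

/* send a ping, then wait up to timeout_ms for the server's ack;
   returns 0 if the ack arrived, -1 if the connection should be dropped */
static int heartbeat(int sockfd, int timeout_ms)
{
    const char ping = 0x01;               /* assumed protocol ping byte */
    char ack;

    if (send(sockfd, &ping, 1, 0) != 1)
        return -1;                         /* sending already failed */

    struct pollfd pfd = { .fd = sockfd, .events = POLLIN };
    if (poll(&pfd, 1, timeout_ms) <= 0)
        return -1;                         /* timeout or error: peer unreachable */

    if (recv(sockfd, &ack, 1, 0) != 1)
        return -1;                         /* connection closed or broken */

    return 0;                              /* server is alive */
}

Calling something like this every X seconds from the client's send loop gives the N*X-second failure detection described above.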