How to detect disconnection in an IOCP server using zero-byte recv - sockets

I'm currently implementing an IOCP server for a game and I'm trying the zero-byte recv technique.
I have 3 questions.
If you detect disconnection from the client by checking whether bytesTransferred is 0, how do you distinguish a completed zero-byte receive from a disconnection?
I'm not performing a non-blocking recv() when I process the actual receive, because clients send the byte count of the actual data first, so I know how many bytes I'm receiving. Do I still need to use a non-blocking recv()?
I'm doing it like so.
InputMemoryBitStream incomming;
std::string data;
uint32_t strLen = 0;
// Read the 4-byte length prefix the client sends first
// (the return value of recv() is not checked here).
recv(socket, reinterpret_cast<char*>(&strLen), sizeof(uint32_t), 0);
incomming.resize(strLen);
// Read the payload itself, then decode it.
recv(socket, reinterpret_cast<char*>(incomming.getBufferPtr()), strLen, 0);
incomming.read(data, strLen);
(InputMemoryBitStream is for reading compressed data.)
I'm dynamically allocating the per-I/O data every time I call WSARecv() or WSASend(), and I free it as soon as I finish processing the completed I/O. Is it inefficient to do that, or is it acceptable? Should I maybe reuse the per-I/O data?
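For reference, the per-I/O data I allocate looks roughly like this (a simplified sketch - the struct and field names are my own, not from any library):
#include <winsock2.h>

// The op tag is how my completion handler tells a zero-byte recv completion
// apart from a normal recv or send completion.
enum class IoOp { ZeroByteRecv, Recv, Send };

struct PerIoData
{
    WSAOVERLAPPED overlapped;   // kept as the first member so the LPOVERLAPPED
                                // from GetQueuedCompletionStatus can be cast back
    WSABUF        wsaBuf;       // len == 0 when posting the zero-byte recv
    char          buffer[4096];
    IoOp          op;           // which operation this completion belongs to
};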
Thank you in advance.


How much data does recv() return from a socket after blocking? [duplicate]

The recv() library function man page mentions that:
It returns the number of bytes received. It normally returns any data available, up to the requested amount, rather than waiting for receipt of the full amount requested.
If we are using a blocking recv() call and request 100 bytes:
recv(sockDesc, buffer, size, 0); /* Where size is 100. */
and only 50 bytes are sent by the server, will this recv() block until 100 bytes are available, or will it return after receiving the 50 bytes?
The scenario could be that:
the server crashes after sending only 50 bytes
bad protocol design where the server sends only 50 bytes while the client expects 100, and the server is also waiting for the client's reply (i.e. the server has not initiated a socket close, which would make recv() return)
I am interested in the Linux / Solaris platforms. I don't have a development environment to check this out myself.
recv() will return as soon as there is data in the internal buffers to return. It will not wait until there are 100 bytes just because you requested 100 bytes.
If you're sending 100-byte "messages", remember that TCP does not provide messages, it is just a stream. If you're dealing with application-level messages, you need to handle that at the application layer, as TCP will not do it for you.
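One common way to do that is to prefix every application message with its length; here is a rough sketch (the names are illustrative, not from the question, and error handling is minimal):
#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch only: frame a message as a 4-byte network-order length followed by
 * the payload, so the receiver knows where one message ends and the next begins. */
int send_message(int sock, const void *payload, uint32_t len)
{
    char frame[4 + 512];                 /* assumes messages of at most 512 bytes */
    uint32_t netlen = htonl(len);

    if (len > 512)
        return -1;
    memcpy(frame, &netlen, 4);
    memcpy(frame + 4, payload, len);
    /* a production sender would also loop here to handle short send()s */
    return send(sock, frame, 4 + len, 0) == (ssize_t)(4 + len) ? 0 : -1;
}
The receiving side then reads the 4-byte length first and loops until exactly that many payload bytes have arrived (see readn() below).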
There are many, many conditions where a send() call of 100 bytes might not be read fully on the other end with only one recv(..., 100) call; here are just a few examples:
The sending TCP stack decides to bundle together 15 write calls, and the MTU happens to be 1460; depending on the timing of the arriving data, the client's first 14 recv() calls might fetch 100 bytes each and the 15th only 60 bytes - the last 40 bytes will come the next time you call recv(). (And if you call recv() with a buffer of 100, you might get the last 40 bytes of the prior application "message" plus the first 60 bytes of the next one.)
The sender's buffers are full, perhaps because the reader is slow or the network is congested. At some point data gets through, and as the buffers drain, the last chunk of data isn't a multiple of 100 bytes.
The receiver's buffers are full; while your app recv()s that data, the last chunk it pulls up is only partial because the whole 100 bytes of that message didn't fit in the buffers.
Many of these scenarios are rather hard to test, especially on a LAN where you might not have much congestion or packet loss - things may differ as you ramp the rate at which messages are sent/produced up and down.
Anyway, if you want to read exactly 100 bytes from a socket, use something like:
#include <unistd.h>   /* read() */

/* Read exactly n bytes, looping over short reads. Returns n on success,
 * the first read's result (0 or -1) on immediate EOF/error, or a short
 * count if EOF/error happens part-way through. */
int
readn(int f, void *av, int n)
{
    char *a = av;
    int t = 0;

    while (t < n) {
        int m = read(f, a + t, n - t);
        if (m <= 0) {
            if (t == 0)
                return m;
            break;
        }
        t += m;
    }
    return t;
}
...
if (readn(mysocket, buffer, BUFFER_SZ) != BUFFER_SZ) {
    /* something really bad is going on */
}
The behavior is determined by two things: the receive low-water mark, and whether or not you pass the MSG_WAITALL flag. If you pass this flag, the call will block until the requested number of bytes have been received (a minimal example follows the quote below). Otherwise it returns as soon as at least SO_RCVLOWAT bytes are available in the socket's receive buffer.
SO_RCVLOWAT
Sets the minimum number of bytes to process for socket input operations. The default value for SO_RCVLOWAT is 1. If SO_RCVLOWAT is set to a larger value, blocking receive calls normally wait until they have received the smaller of the low water mark value or the requested amount. (They may return less than the low water mark if an error occurs, a signal is caught, or the type of data next in the receive queue is different from that returned, e.g. out-of-band data.) This option takes an int value. Note that not all implementations allow this option to be set.
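For example, to block until the full 100 bytes from the question have arrived, a minimal sketch (reusing the question's sockDesc, and assuming a connected, blocking TCP socket) could look like:
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: read exactly 100 bytes, or report close/error. */
int read_full_message(int sockDesc, char buffer[100])
{
    /* MSG_WAITALL: block until all 100 bytes are in, the peer closes,
     * a signal interrupts the call, or an error occurs - so still check n. */
    ssize_t n = recv(sockDesc, buffer, 100, MSG_WAITALL);
    if (n == 100) return 0;    /* full message received */
    if (n == 0)   return 1;    /* peer performed an orderly shutdown */
    return -1;                 /* error (n == -1) or short read after a signal */
}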
If you read the quote precisely, the most common scenario is:
the socket is receiving data; those 100 bytes will take some time to arrive.
the recv() call is made.
If there are more than 0 bytes in the buffer, recv() returns what is available and does not wait.
While there are 0 bytes available it blocks, and the granularity of the threading system determines how long that lasts.

Handle multiple UDP clients

I have a single server and multiple UDP clients in my setup, each sending a boolean string ("true"/"false", 5 bytes max) at regular intervals (a few milliseconds). Based on this I determine whether each device is alive or not. The number of clients is unknown before starting the program, and I am using C++ as my programming language.
Is it possible to use multi-threading with UDP sockets? I came across an example for TCP where they create a new socket descriptor and spawn a thread. If it is possible to write a multi-threaded UDP server, please provide example/reference code.
My receive loop currently looks like this:
while (true)
{
    if ((numbytes = recvfrom(sockfd, buf, MAXBUFLEN - 1, 0,
                             (struct sockaddr *)&client_addr, &client_addr_len)) == -1)
    {
        perror("recvfrom");
        exit(1);
    }
    // Process buffer...
    // Do something with buffer
}
In the multiple-client scenario, if I receive a message from one client it is copied into my local buffer of maximum buffer size. While I am processing this message, if one more message arrives from another client, where is that message stored in the meantime, before it is read into the local buffer? Does the socket file descriptor have its own buffer?
What happens if both clients send a message at the same time? Only one is read into the local buffer; what will happen to the other message? Will it wait for the next recvfrom()?
I read that if the maximum buffer is smaller than the message / packet size received, then recvfrom() will only read up to the maximum buffer length and might throw some error... although all of my clients only send 5 bytes maximum. If I set the maximum buffer to 1024 bytes, which is far more than expected, what price am I going to pay?
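For reference, what I had in mind is keeping everything on one socket and telling clients apart by the source address that recvfrom() fills in - roughly like this sketch (the 1024-byte buffer is the size I mentioned above, and the names are just placeholders):
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

#define MAXBUFLEN 1024   /* much larger than the 5-byte payloads */

void receive_loop(int sockfd)
{
    char buf[MAXBUFLEN];

    while (true) {
        struct sockaddr_in client_addr;
        socklen_t client_addr_len = sizeof(client_addr);
        ssize_t numbytes = recvfrom(sockfd, buf, MAXBUFLEN - 1, 0,
                                    (struct sockaddr *)&client_addr,
                                    &client_addr_len);
        if (numbytes == -1) {
            perror("recvfrom");
            break;
        }
        buf[numbytes] = '\0';
        /* client_addr identifies the sender, so one socket and one thread can
         * update a liveness table keyed on (sin_addr, sin_port) for every client */
        printf("%s:%u -> %s\n", inet_ntoa(client_addr.sin_addr),
               (unsigned)ntohs(client_addr.sin_port), buf);
    }
}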
Thank you.

Using the sockets package for Octave on Ubuntu

I am trying to use the sockets package for Octave on my Ubuntu machine. I am using the Java Sockets API for connecting to Octave. The Java program is the client, Octave is my server. I just tried the code example from this post:
http://pauldreik.blogspot.de/2009/04/octave-sockets-example.html
There are two problems:
1.)
Using SOCK_STREAM, for some strange reason, certain bytes are received by recv() right after accept(), even though I'm not sending anything from the client. Subsequent messages I send with Java have no effect; it seems the Octave socket has its own idea about what it thinks it receives, regardless of what I'm actually sending.
2.)
Using SOCK_DGRAM, there is another problem:
I do receive my actual message this way, but it seems that a recv() call doesn't remove the first element from the datagram queue. Until I send a second datagram to the socket, any subsequent recv() calls repeatedly read the first datagram as if it were still in the queue. So recv() doesn't even block to wait for a genuinely new datagram; instead, it simply reads the same old one again. This is useless, since I cannot tell my server to wait for news from the client.
Is this how UDP is supposed to behave? I thought datagram packets really are removed from the datagram queue by recv().
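For comparison, my understanding of plain C sockets is that every recv() consumes one whole datagram, and a buffer that is too small simply discards the excess rather than leaving it queued - e.g. this sketch (assuming Linux, and a single 10-byte datagram sent by the client):
#include <sys/socket.h>

char part[4];
recv(s, part, sizeof part, 0);  /* returns 4; the remaining 6 bytes of the
                                   datagram are discarded, not left queued */
recv(s, part, sizeof part, 0);  /* blocks until a NEW datagram arrives */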
This is my server side code:
s=socket(AF_INET, SOCK_DGRAM, 0);
bind(s,12345);
[config,count] = recv(s, 10)
[test,count] = recv(s, 4)
And this is my Java client:
public LiveSeparationClient(String host, int port, byte channels, byte sampleSize, int sampleRate, int millisecondsPerFrame) throws UnknownHostException, IOException {
    this.port = port;
    socket = new DatagramSocket();
    this.host = InetAddress.getByName(host);
    DatagramPacket packet = new DatagramPacket(ByteBuffer.allocate(10)
            .put(new byte[]{channels, sampleSize})
            .putInt(sampleRate)
            .putInt(millisecondsPerFrame)
            .array(), 10, this.host, port
    );
    socket.send(packet);
    samplesPerFrame = (int) Math.floor((double) millisecondsPerFrame / 1000.0 * (double) sampleRate);
}
As you can see, I'm sending 10 bytes and receiving all 10 (this works so far) with recv(s, 10). In a later part of my Java program, packets will also be generated and sent, but this may take some seconds. In the meantime, the second receive, recv(s, 4), in Octave should wait for a genuinely new datagram. But this doesn't happen; it simply reads the first 4 bytes of the same old packet again. recv() doesn't block the second time.
I hope it is not a problem for you to fix this?
Thanks in advance :-)
Marvin
P.S.: Also, I don't understand why listen() and accept() are both necessary when using SOCK_STREAM, but not for SOCK_DGRAM.

recv() returns 0

I have a very annoying problem that I found several times on other forums but I can't find a proper solution.
The problem is that recv() returns 0 on the last few bytes of a connection. Here is some background information.
Both (client / server) applications run on the same machine.
Both (client / server) sockets are non-blocking
The transferred data size is 53 bytes.
Both (client / server) call shutdown and closesocket when the last send()/recv() was executed.
I also tried with SO_LINGER and 10 seconds, no success either
I call send() several times (small chunks), and from the client side 53 bytes are transferred.
The server calls recv() several times (4-byte requests), reads 49 bytes, and then recv() returns 0 (53 bytes - 49 bytes, so 4 bytes are missing).
MSDN and some forums write for non-blocking sockets:
recv() definitely returns < 0 on error and errno / WSAGetLastError is set
recv() definitely returns = 0 when the other side closed the connection
recv() definitely returns > 0 when data was read
MSDN also says:
Using the closesocket or shutdown functions with SD_SEND or SD_BOTH results in a RELEASE signal being sent out on the control channel. Due to ATM's use of separate signal and data channels, it is possible that a RELEASE signal could reach the remote end before the last of the data reaches its destination, resulting in a loss of that data. One possible solution is programming a sufficient delay between the last data sent and the closesocket or shutdown function calls for an ATM socket.
This is covered in the example for recv() and send(): http://msdn.microsoft.com/en-us/library/windows/desktop/ms740121(v=vs.85).aspx
But still no success: in about 10% of all connections the transfer is cut off after the 49th byte is received, while 90% of the connections succeed. Any ideas? Thanks.
recv() returns 0 only when you request a 0-byte buffer or the other peer has gracefully disconnected. If you are not receiving all of the data you are expecting, then you are not reading the data correctly to begin with. Please update your question with your actual code.
My guess is that you're not really sending all the data you think you are sending. Check out:
The Ultimate SO_LINGER page
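Another thing worth ruling out is data being discarded at close time. The usual pattern is a graceful shutdown on the sending side: stop sending, drain until the peer closes, then close. A rough sketch (Winsock, with a blocking socket assumed for simplicity - your sockets are non-blocking, so the drain loop would need to tolerate WSAEWOULDBLOCK):
#include <winsock2.h>

// Sketch: tell the peer we are done sending, drain whatever is still in
// flight until the peer closes its side, and only then release the socket.
shutdown(sock, SD_SEND);

char tmp[128];
int n;
while ((n = recv(sock, tmp, sizeof(tmp), 0)) > 0)
    ;   // discard (or process) the remaining incoming data

closesocket(sock);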
recv() definitely returns = 0 when the other side closed the connection
This is not completely true. In the following code, using non-blocking Winsock2 TCP, when no data is available select() returns 1 and recv() returns 0, as does WSAGetLastError().
fd_set test = {1, socket};                      // Winsock fd_set: fd_count = 1, fd_array[0] = socket
const timeval timeout = {0, 0};                 // zero timeout: poll, don't block
if (!::select(0, &test, nullptr, nullptr, &timeout)) return 0;
int done = ::recv(socket, buffer, 1, 0);
This continues even after the other end has called:
::shutdown(socket, SD_BOTH);
::closesocket(socket);
and then exited. Communication works as expected; it is just ::recv that seems to be "broken".

TCP socket question

I started learning the TCP protocol from the internet and am doing some experiments. I read this in an article at http://www.diffen.com/difference/TCP_vs_UDP:
"TCP is more reliable since it manages message acknowledgment and retransmissions in case of lost parts. Thus there is absolutely no missing data."
Then I did my experiment: I wrote a block of code using a TCP socket:
while( ! EOF (file))
{
    data = read_from(file, 5KB); //read 5KB from file
    write(data, socket);         //write data to socket to send
}
I thought this would be fine because "TCP is reliable" and it "retransmits lost parts"... but it's not fine at all. A small file is OK, but when it comes to about 2 MB, sometimes it works and sometimes it doesn't...
Now, I try another one:
while( ! EOF (file))
{
    wait_for_ACK();              //or sleep 5 seconds
    data = read_from(file, 5KB); //read 5KB from file
    write(data, socket);         //write data to socket to send
}
It's good now...
All I can think of is that the first one fails because of:
1. a buffer overflow on the sender, because the network sending rate is slower than the rate at which the program writes (the sending rate is controlled by TCP)
2. maybe the sending rate is greater than the program's writing rate, but some packets are lost (and after some retransmissions it still fails and TCP gives up...)
Any ideas?
Thanks.
TCP will ensure that you don't lose data, but you should check how many bytes actually got accepted for transmission... the typical loop is:
while (size > 0)
{
    int sz = send(socket, bufptr, size, 0);
    if (sz == -1)
        ... whoops, error ...
    size -= sz;
    bufptr += sz;
}
When the send() call accepts some data from your program, it becomes the job of the OS to get it to the destination (including retransmission), but the send buffer may be smaller than the total amount you need to send, and that's why the resulting sz (the number of bytes accepted for transmission) may be less than size.
It's also important to consider that sending is asynchronous: after the send function returns, the data has not yet arrived at the destination; it has only been handed to the TCP transport system for delivery. If you want to know when it has been received, you'll have to use other means (e.g. a reply message from your counterpart).
You have to check the return value of write(socket) to make sure it wrote what you asked.
Loop until you've sent everything or a timeout you've chosen has expired.
Do not use indefinite timeouts on socket read/write. You're asking for trouble if you do, especially on Windows.
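For example, one simple way to bound blocking calls on Windows is a socket-level timeout; Winsock takes the value as a DWORD of milliseconds (on POSIX systems SO_RCVTIMEO/SO_SNDTIMEO take a struct timeval instead). A sketch:
#include <winsock2.h>

// Sketch: make recv()/send() give up after 5 seconds instead of blocking forever.
DWORD timeout_ms = 5000;
setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
           reinterpret_cast<const char*>(&timeout_ms), sizeof(timeout_ms));
setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
           reinterpret_cast<const char*>(&timeout_ms), sizeof(timeout_ms));
// A timed-out recv() now fails with WSAETIMEDOUT instead of blocking forever.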