I have a single server and multiple UDP clients. Each client sends a boolean string ("true"/"false", 5 bytes max) at regular intervals of a few milliseconds, and based on that I determine whether the device is alive or not. The number of clients is unknown before the program starts, and I am using C++.
Is it possible to use multithreading with UDP sockets? I came across an example for TCP where they create a new socket descriptor and spawn a thread. If a multithreaded UDP server is possible, please provide example/reference code.
My receive loop currently looks like this:
while (true)
{
    if ((numbytes = recvfrom(sockfd, buf, MAXBUFLEN - 1, 0,
                             (struct sockaddr *)&client_addr, &client_addr_len)) == -1)
    {
        perror("recvfrom");
        exit(1);
    }
    // Process buffer...
    // Do something with buffer
}
In the multiple-client scenario, when I receive a message from one client it is copied into a local buffer of the maximum buffer size. While I am processing that message, if another client sends one more message, where is it stored in the meantime, before it is read into the local buffer? Does the socket file descriptor have its own buffer?
What happens if both clients send a message at the same time? Only one is read into the local buffer, so what happens to the other message? Will it wait for the next recvfrom?
I read that if the maximum buffer is smaller than the message/packet received, then recvfrom will only read up to the maximum buffer length and may report an error, although all of my clients send 5 bytes at most. If I set the maximum to 1024 bytes, which is far more than expected, what price am I going to pay?
Thank you.
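For reference, UDP has no accept() that hands you a per-client socket the way TCP does, so one common pattern is a single recvfrom loop on one socket that passes each datagram (together with the sender's address) to a worker thread. Below is a minimal sketch of that idea; the port number and the process_message function are illustrative placeholders for your own logic, not part of any existing code.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <string>
#include <thread>

static const int MAXBUFLEN = 1024;

// Placeholder: decide from the payload whether the device is alive.
static void process_message(std::string msg, sockaddr_in client)
{
    char ip[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &client.sin_addr, ip, sizeof(ip));
    std::printf("from %s: %s\n", ip, msg.c_str());
}

int main()
{
    int sockfd = socket(AF_INET, SOCK_DGRAM, 0);
    if (sockfd == -1) { perror("socket"); return 1; }

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(4950);                 // example port
    if (bind(sockfd, (sockaddr *)&addr, sizeof(addr)) == -1) {
        perror("bind");
        return 1;
    }

    while (true) {
        char buf[MAXBUFLEN];
        sockaddr_in client{};
        socklen_t client_len = sizeof(client);
        ssize_t n = recvfrom(sockfd, buf, sizeof(buf) - 1, 0,
                             (sockaddr *)&client, &client_len);
        if (n == -1) { perror("recvfrom"); continue; }

        // Hand the datagram to a worker thread; the main loop goes straight
        // back to recvfrom so the kernel's socket receive buffer drains quickly.
        std::thread(process_message, std::string(buf, n), client).detach();
    }

    close(sockfd);
    return 0;
}

Spawning a detached thread per 5-byte datagram is wasteful in practice; a fixed pool of workers fed from a queue is the more usual design, but the single-reader loop above is the essential part.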
Related
What will happen if I establish a connection between a client and a server and configure a different buffer size for each of them?
This is my client's code:
import socket,sys
TCP_IP = sys.argv[1]
TCP_PORT = int(sys.argv[2])
BUFFER_SIZE = 1024
MESSAGE = "World! Hello, World!"
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((TCP_IP, TCP_PORT))
s.send(MESSAGE)
data = s.recv(BUFFER_SIZE)
s.close()
print "received data:", data
Server's code:
import socket,sys
TCP_IP = '0.0.0.0'
TCP_PORT = int(sys.argv[1])
BUFFER_SIZE = 5
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(1)
while True:
    conn, addr = s.accept()
    print 'New connection from:', addr
    while True:
        data = conn.recv(BUFFER_SIZE)
        if not data: break
        print "received:", data
        conn.send(data.upper())
    conn.close()
Does that mean I will be limited to only 5 bytes? Which would mean I won't be able to receive the full message and will lose the remaining 1024 - 5 bytes?
Or does it mean I can only receive chunks of 5 bytes, so instead of receiving one packet of 1024 bytes as the client sent it, I would have to divide 1024 by 5 and get 204.8 packets (?), which sounds impossible.
What, in general, is happening in that code?
Thanks.
Your arguments are based on the assumption that a single send should match a single recv. But this is not the case: TCP is a byte stream, not a message-based protocol. All that matters are the transferred bytes, and for that it does not matter whether one recv or ten are needed to read 50 bytes.
Apart from that, send is not guaranteed to send the full buffer either. It might send only part of the buffer, i.e. the sender actually needs to check the return code to find out how much of the given buffer was sent now and how much needs to be retried later.
And note that the underlying "packet" is again a different thing. A send of 2000 bytes will usually need multiple packets on the wire (depending on the maximum transfer unit of the underlying data link layer). But this does not mean that multiple recv calls are needed: if all 2000 bytes have already arrived in the OS-level receive buffer at the recipient, they can be read at once, even though they traveled in multiple packets.
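To make the stream semantics concrete: when the receiver needs an exact number of bytes, the read is normally wrapped in a loop, because any single recv may return fewer bytes than requested. A minimal sketch (error handling reduced to returning false):

#include <sys/socket.h>
#include <sys/types.h>
#include <cstddef>

// Keep calling recv until exactly 'len' bytes have arrived.
// Returns false on error or if the peer closes the connection first.
bool recv_exact(int fd, char *buf, std::size_t len)
{
    std::size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0)                        // 0 = orderly close, -1 = error
            return false;
        got += static_cast<std::size_t>(n);
    }
    return true;
}

A mirrored loop around send covers the partial-write case described above.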
Your socket won't lose the remaining 1024 - 5 (1019) bytes. They simply stay in the socket's receive buffer, ready to be read, so all you need to do is read from the socket again. The size of the buffer you read into is up to you, and you are not limited to 5 bytes in total: you are only limiting each single read to 5 bytes. So for 1024 bytes you would need 204 reads of 5 bytes plus one final read, which returns only the remaining 4 bytes, telling you there is nothing more available for now.
I'm currently implementing an IOCP server for a game and I'm trying the zero-byte recv technique.
I have 3 questions.
If you detect that a client has disconnected by checking whether bytesTransferred is 0, how do you distinguish between a receive completion and a disconnection?
I'm not using non-blocking recv() for the actual receive, because clients send the byte count of the actual data first, so I know how many bytes I'm receiving. Do I still need to use non-blocking recv()?
I'm doing it like so:
InputMemoryBitStream incomming;
std::string data;
uint32_t strLen = 0;
recv(socket, reinterpret_cast<char*>(&strLen), sizeof(uint32_t), 0);
incomming.resize(strLen);
recv(socket, reinterpret_cast<char*>(incomming.getBufferPtr()), strLen, 0);
incomming.read(data, strLen);
(InputMemoryBitStream is for reading compressed data.)
I'm dynamically allocating per-I/O data every time I call WSARecv() and WSASend(), and I free it as soon as I finish processing the completed I/O. Is that inefficient, or is it acceptable? Should I reuse the per-I/O data instead?
Thank you in advance.
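One side note on the two recv calls shown above: each recv may legally return fewer bytes than requested (or 0, or an error), so a length-prefixed read is usually wrapped in a loop that checks the return value. A minimal blocking sketch, written POSIX-style (on Winsock the descriptor type and headers differ), which for illustration assumes the sender writes the 4-byte length in network byte order (the original snippet reads it raw) and which uses std::string instead of InputMemoryBitStream:

#include <arpa/inet.h>     // ntohl
#include <sys/socket.h>
#include <sys/types.h>
#include <cstdint>
#include <string>
#include <vector>

// recv can return short reads, so loop until 'len' bytes have arrived.
static bool recv_exact(int fd, void *dst, std::size_t len)
{
    char *p = static_cast<char *>(dst);
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)                         // 0 = peer closed, -1 = error
            return false;
        p += n;
        len -= static_cast<std::size_t>(n);
    }
    return true;
}

// Length-prefixed message: a 4-byte length followed by the payload.
static bool recv_message(int fd, std::string &out)
{
    uint32_t netLen = 0;
    if (!recv_exact(fd, &netLen, sizeof(netLen)))
        return false;
    uint32_t len = ntohl(netLen);           // assumption: network byte order on the wire
    std::vector<char> buf(len);
    if (len > 0 && !recv_exact(fd, buf.data(), len))
        return false;
    out.assign(buf.begin(), buf.end());
    return true;
}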
After studying the "window size" concept, what I understood is that it holds packets before they are sent over the wire, until an acknowledgement arrives for the earliest packet. Once it fills up, subsequent packets will be dropped. Somewhere I have also read that TCP is a streaming protocol, and that "packet" is a concept belonging to the IP protocol at the network layer.
What I assumed until now was that I declare a buffer (inside the code), fill it with some data and send that buffer over a socket. I declared a buffer of 10,000 bytes and sent it repeatedly over a 10 Gbps link.
I have the following assumptions and questions. Please verify and help.
If I want to send a packet of 64, 256, 512, etc. bytes, I declare a buffer of that size inside the code and send it over the socket. Each execution of the send() command will send one packet of that size.
So if I want to study the effect of packet size on throughput, what do I have to do? Do I need to vary the buffer size in the code?
What are the socket buffers that we set using SO_SNDBUF and SO_RCVBUF? Google says they are buffer space for the socket. Are they the same as the TCP window size or something different? Which parameter is more suitable to vary in order to increase throughput?
Also, there are three parameters for the socket buffer: min, default and max. Which one should I vary in my experiment to get the most relevant results?
If I want to send a packet of 64, 256, 512, etc. bytes, I declare a buffer of that size inside the code and send it over the socket. Each execution of the send() command will send one packet of that size.
Only if you disable the Nagle algorithm and the size is less than the path MTU. You mustn't rely on this.
So if I want to study the effect of packet size on throughput, what do I have to do? Vary the buffer size in the code?
No. Vary SO_RCVBUF at the receiver. This is the single biggest determinant of throughput, as it determines the maximum receive window.
What are the socket buffers which we set using SO_SNDBUF and SO_RCVBUF?
Send buffer size at the sender, and receive buffer size at the receiver. In the kernel.
Are they the same as the TCP window size
See above.
or something different? Which parameter is more suitable to vary to increase throughput?
See above.
Also, there are three parameters for the socket buffer: min, default and max. Which one should I vary in my experiment to get the most relevant results?
None of them. These are the system-wide parameters. Just play with SO_SNDBUF and SO_RCVBUF for the specific sockets in your application.
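In code, both are ordinary per-socket options set with setsockopt, ideally before connecting or listening, since the window is negotiated at connection setup. A minimal sketch; the concrete value you pass is just an example:

#include <sys/socket.h>
#include <cstdio>

// Ask for a larger receive buffer on this one socket. The kernel may round
// or cap the value, so read it back with getsockopt to see what you really got.
void set_receive_buffer(int sockfd, int bytes)
{
    if (setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) == -1)
        perror("setsockopt(SO_RCVBUF)");

    int actual = 0;
    socklen_t len = sizeof(actual);
    if (getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &actual, &len) == 0)
        std::printf("SO_RCVBUF is now %d bytes\n", actual);
}

For example, call set_receive_buffer(sockfd, 256 * 1024) on the receiving socket; on Linux the value reported back is typically doubled by the kernel to account for bookkeeping overhead.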
TCP does not directly expose a way to control how packets are sent, since it is a stream protocol. But you can make the TCP stack send packets immediately by disabling the Nagle algorithm. That way all data that you send will be sent out right away instead of being buffered. The data will still be split into packets of MTU size, which is roughly ~1400 bytes depending on the link.
To answer (2): disable Nagling and invoke send with buffers of < 1400 bytes. Use Wireshark to make sure you got what you wanted.
The buffer settings have nothing to do with any of this. I know of no valid reason to touch them.
In general this question is probably moot since you seem to want to send a lot of data. Just leave Nagling enabled and send big buffers (such as 64KB).
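For completeness, disabling Nagle is a one-line socket option; a minimal sketch:

#include <netinet/in.h>    // IPPROTO_TCP
#include <netinet/tcp.h>   // TCP_NODELAY
#include <sys/socket.h>
#include <cstdio>

// Disable the Nagle algorithm so small writes are pushed out immediately
// instead of being coalesced with later data.
void disable_nagle(int sockfd)
{
    int on = 1;
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on)) == -1)
        perror("setsockopt(TCP_NODELAY)");
}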
I did some experiments on Windows 10, using:
the example code from https://docs.python.org/3/library/socketserver.html#asynchronous-mixins,
RawCap for loopback capture,
Wireshark for viewing the result.
The primary client code is:
def client(ip, port, message):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 100000)
    sock.connect((ip, port))
    sock.sendall(bytes(message, 'ascii'))
    response = str(sock.recv(1024), 'ascii')
    print("Received: {}".format(response))
Here is the result (the server port is 11111):
As you can see, the TCP receive window size is the same as SO_RCVBUF. This may or may not hold on other platforms, so you can verify it there.
The documentation at https://msdn.microsoft.com/en-us/library/windows/hardware/ff570832(v=vs.85).aspx verifies this:
"The SO_RCVBUF socket option determines the size of a socket's receive buffer that is used by the underlying transport."
Also, when I set SO_SNDBUF = 100000, it had no effect on the TCP transmission between client and server, as the server can just discard data if the client sends a lot of data at one time.
So, if you want to tune SO_RCVBUF for maximum throughput, you can refer to http://packetbomb.com/understanding-throughput-and-tcp-windows/; the OS may offer a function to detect the ideal send backlog (ISB).
I want to read IP packets from a non-blocking tun/tap file descriptor tunfd.
I set tunfd to non-blocking and register a READ_EV event for it in libevent.
When the event is triggered, I read the first 20 bytes to get the IP header, and then read the rest.
nr_bytes = read(tunfd, buf, 20);
...
ip_len = .... // here I get the IP length
....
nr_bytes = read(tunfd, buf+20, ip_len-20);
But for the read(tunfd, buf+20, ip_len-20) I get an EAGAIN error, although there should actually be a full packet available, and therefore some bytes to read.
Why do I get this error? Is tunfd not compatible with non-blocking mode or with libevent?
Thanks!
Reads and writes with TUN/TAP, much like reads and writes on datagram sockets, must be for complete packets. If you read into a buffer that is too small to fit a full packet, the buffer will be filled up and the rest of the packet will be discarded. For writes, if you write a partial packet, the driver will think it's a full packet and deliver the truncated packet through the tunnel device.
Therefore, when you read a TUN/TAP device, you must supply a buffer that is at least as large as the configured MTU on the tun or tap interface.
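Concretely, that means the callback should read one whole packet into an MTU-sized (or larger) buffer in a single call, and take the IP length from the header it now holds, instead of issuing a second read for the body. A minimal sketch, assuming a Linux tun device opened with IFF_NO_PI (so there is no 4-byte packet-information prefix) and using struct iphdr from <netinet/ip.h>:

#include <arpa/inet.h>     // ntohs
#include <netinet/ip.h>    // struct iphdr (Linux)
#include <unistd.h>
#include <cerrno>
#include <cstdio>

// Called when libevent reports tunfd readable: read one complete packet
// in a single call; the buffer must be at least as large as the interface MTU.
void on_tun_readable(int tunfd)
{
    alignas(4) char buf[65536];             // comfortably larger than any usual MTU
    ssize_t n = read(tunfd, buf, sizeof(buf));
    if (n < 0) {
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return;                         // no packet right now; wait for the next event
        perror("read(tunfd)");
        return;
    }
    if (n < static_cast<ssize_t>(sizeof(struct iphdr)))
        return;                             // too short to be an IP packet

    const struct iphdr *ip = reinterpret_cast<const struct iphdr *>(buf);
    std::printf("read %zd bytes, IP total length %u\n",
                n, static_cast<unsigned>(ntohs(ip->tot_len)));
}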
I started learning the TCP protocol from the internet and am doing some experiments. I read this in an article at http://www.diffen.com/difference/TCP_vs_UDP:
"TCP is more reliable since it manages message acknowledgment and retransmissions in case of lost parts. Thus there is absolutely no missing data."
Then I did my experiment: I wrote a block of code using a TCP socket:
while (!EOF(file))
{
    data = read_from(file, 5KB);  // read 5KB from file
    write(data, socket);          // write data to socket to send
}
I thought it would be fine because "TCP is reliable" and it "retransmits lost parts"... But it is not fine at all. A small file is OK, but when it comes to about 2 MB, sometimes it works and sometimes it doesn't...
Now, I try another one:
while (!EOF(file))
{
    wait_for_ACK();               // or sleep 5 seconds
    data = read_from(file, 5KB);  // read 5KB from file
    write(data, socket);          // write data to socket to send
}
It's good now...
All I can think of is that the first version fails because of:
1. Buffer overflow on the sender, because the sending rate is slower than the rate at which the program writes (the sending rate is controlled by TCP).
2. Maybe the sending rate is greater than the writing rate, but some packets are lost (after some retransmissions it still fails and TCP gives up...).
Any ideas?
Thanks.
TCP will ensure that you don't lose data but you should check how many bytes actually got accepted for transmission... the typical loop is
while (size > 0)
{
    int sz = send(socket, bufptr, size, 0);
    if (sz == -1) ... whoops, error ...
    size -= sz;
    bufptr += sz;
}
When the send call accepts some data from your program, it becomes the job of the OS to get that data to the destination (including retransmission), but the buffer for sending may be smaller than the size you need to send, and that's why the resulting sz (the number of bytes accepted for transmission) may be less than size.
It is also important to consider that sending is asynchronous, i.e. after the send function returns, the data is not yet at the destination; it has only been handed to the TCP transport system for delivery. If you want to know when it has been received, you will have to use another mechanism (e.g. a reply message from your counterpart).
You have to check the return value of write(socket) to make sure it wrote what you asked.
Loop until you have sent everything or a timeout you calculated has expired.
Do not use indefinite timeouts on socket read/write. You're asking for trouble if you do, especially on Windows.
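One way to avoid an indefinite block without restructuring the code is a per-socket timeout; a minimal sketch using SO_RCVTIMEO (the value is just an example, and on Windows the option takes a DWORD of milliseconds rather than a timeval):

#include <sys/socket.h>
#include <sys/time.h>
#include <cstdio>

// After this, a blocking recv() returns -1 with errno set to EAGAIN/EWOULDBLOCK
// if nothing arrives within the timeout, instead of hanging forever.
void set_recv_timeout(int sockfd, int seconds)
{
    timeval tv{};
    tv.tv_sec = seconds;
    tv.tv_usec = 0;
    if (setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) == -1)
        perror("setsockopt(SO_RCVTIMEO)");
}

SO_SNDTIMEO works the same way for blocking writes.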