Does UDP allow two clients to connect at the same time to a server port?
DatagramSocket udp1 = new DatagramSocket(8000); // bound to local port 8000
DatagramSocket udp2 = new DatagramSocket(8000); // same port, but on a different host
What happens if udp1 and udp2 are created from two different IPs and send data at the same time?
Will it cause any issue?
Note: UDP doesn't really have a concept of "connect", just sending and receiving packets. (If making a TCP connection is analogous to making a telephone call, then sending a UDP packet is more like mailing a letter.)
Regarding two packets arriving at the same UDP port on a server at the same time: the TCP/IP stack keeps a fixed-size receive buffer for each socket that the server creates, and whenever a packet arrives at the port that socket is bound to, the packet is placed into that buffer. The server program is then woken up and can recv() the data whenever it cares to do so. So in most cases, both packets will be placed into the buffer and then recv()'d and processed by the server program. The exception is when there is not enough room left in the buffer for one or both of the packets to fit (remember, it's a fixed-size buffer); in that case, the packet(s) that don't fit will simply be dropped and never seen again.
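For concreteness, a minimal sketch of such a receiving side in C (the question's snippet is Java, but the mechanism is the same; the server port 9000 and the buffer size are arbitrary choices, not from the question):

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);          /* arbitrary server port */
    bind(sock, (struct sockaddr *)&addr, sizeof(addr));

    char buf[2048];
    for (;;) {
        struct sockaddr_in src;
        socklen_t srclen = sizeof(src);
        /* Each recvfrom() pulls one whole datagram out of the socket's
           receive buffer; the source address tells the two clients apart. */
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             (struct sockaddr *)&src, &srclen);
        if (n < 0) break;
        printf("%zd bytes from %s:%d\n", n,
               inet_ntoa(src.sin_addr), ntohs(src.sin_port));
    }
    return 0;
}

Both clients' datagrams land in this one socket's receive buffer and are read out one at a time, exactly as described above.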
A lot of examples can be found about non-blocking TCP sockets; however, I find it difficult to find a good explanation of how UDP sockets should be handled with the poll/select/epoll system calls.
Blocking or non-blocking?
When we have to deal with TCP sockets, it makes sense to set them to non-blocking, since it takes only one slow client connection to prevent the server from serving other clients. However, there are no ACK messages in UDP, so my assumption is that writing to UDP should be fast enough for both cases. Does that mean we can safely use a blocking UDP socket with the family of poll system calls if each time we are going to send a small amount of data (10 Kb, for example)? From this discussion I assume that an ARP request is the only point that can substantially block the sendto function, but isn't that a one-time thing?
Return value of sendto
Let's say the socket is non-blocking. Can there be a scenario where I try to send 1000 bytes of data and the sendto function sends only part of it (say 300 bytes)? Does that mean it has just sent a UDP packet with 300 bytes, and the next time I use sendto I have to account for the rest being sent in a new UDP packet? Is this situation still possible for blocking sockets?
Return value of recvfrom
The same question applies to recvfrom. Can there be a situation where I will need to call recvfrom more than once to obtain the full UDP packet? Is that behaviour different for blocking/non-blocking sockets?
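For reference, a minimal sketch of the pattern being asked about, in C on Linux (the peer address 192.0.2.1, the port, and the sizes are placeholders; the comments state the POSIX datagram semantics the sketch relies on):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* One non-blocking UDP socket driven by poll(). On a UDP socket,
   sendto() queues a whole datagram or fails (e.g. EWOULDBLOCK or
   EMSGSIZE); it never sends a partial datagram. Likewise, recvfrom()
   returns exactly one datagram per call, truncating it if the
   supplied buffer is too small. */
int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    fcntl(sock, F_SETFL, O_NONBLOCK);

    struct sockaddr_in peer;           /* placeholder peer address */
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(9000);
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);

    struct pollfd pfd = { .fd = sock, .events = POLLIN | POLLOUT };
    static char msg[10 * 1024];        /* the ~10 Kb payload from the question */
    static char buf[64 * 1024];

    if (poll(&pfd, 1, 1000) > 0) {
        if (pfd.revents & POLLOUT)
            sendto(sock, msg, sizeof(msg), 0,
                   (struct sockaddr *)&peer, sizeof(peer));
        if (pfd.revents & POLLIN) {
            struct sockaddr_in src;
            socklen_t len = sizeof(src);
            ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&src, &len);
            if (n >= 0)
                printf("got a %zd-byte datagram\n", n);
        }
    }
    close(sock);
    return 0;
}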
I have a client application that uses IOCP for socket communication. I'm using ConnectEx to make the TCP connection to the remote endpoint (binding the socket to ADDR_ANY and port 0 before calling ConnectEx).
In my application, it is valid to have two connections to the same remote endpoint (same IP address and port number). When I test that condition with my current code, I have two overlapped I/O read operations outstanding (one on each connected socket) from calls to WSARecv(). Each WSARecv() is called with the correct socket and overlapped structure, for example: WSARecv(socket1, ... overlapped1) and WSARecv(socket2, ... overlapped2). The problem I've run into is that when I get a response back from either remote, it triggers the completion event for both of the outstanding overlapped operations. My code only produces this result when the two remotes have the same IP address and port number, not when there are two unique remote addresses. Is this the expected behavior (hopefully not)? If so, is there another way to accomplish this?
I'm posting an answer, even though it is really just an explanation of why the problem happened.
My test involved connecting to and communicating with a remote device that provides data. It turns out that it is on the other side of a Digi terminal server, so the connection path was:
my test computer (via TCP) -> Digi terminal server (via Serial) -> remote device.
The Digi terminal server basically converts TCP/IP to serial communications, and back. Since the serial side doesn't have a concept of 'connectedness', the Digi doesn't know which TCP/IP connection should receive the serial data in response to a TCP/IP request, so it forwards the serial data to all active connections on the TCP/IP side. That's what was producing the IOCP trigger on both of my pending overlapped operations. Every time a request was sent to the Digi, it sent the request out of its serial port. When the end device responded, the Digi forwarded the response data to each of my TCP/IP connections.
Thanks to everyone who commented on my question, but sorry for taking up your time.
I have a question about socket programming. When I use a socket to send data, I can use an API such as sendto() to send using TCP or UDP.
For sendto(), we pass a buffer pointer and the number of bytes we want to send.
In this case, if I give a large byte count (e.g. 20000 bytes), my understanding is that the network MTU will not be that big, so the socket actually sends multiple packets instead of one big packet. If these 20000 bytes are split into several UDP/TCP packets, will they still be seen as one packet at the beginning? Is this process UDP/TCP fragmentation?
My other question: if the data size I pass to sendto() is smaller than the MTU, can I guarantee that a single call to sendto() makes the socket send only one TCP/UDP packet?
Thanks in advance.
will these 20000 bytes be seen as one packet at the beginning? Is this process UDP/TCP fragmentation?
UDP will send it as one datagram if your socket send buffer is large enough to hold it. Otherwise you will get EMSGSIZE. It may subsequently get fragmented at the IP layer, and if a fragment gets lost so does the whole datagram, but if all the fragments arrive the entire datagram will be received intact.
TCP will send it all, segmenting and fragmenting it however it sees fit. It will all arrive, intact and in order, unless there is a long enough network outage.
My other question: if the data size I pass to sendto() is smaller than the MTU, can I guarantee that a single call to sendto() makes the socket send only one TCP/UDP packet?
UDP: yes.
TCP: no.
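To make the UDP case concrete, a small sketch in C (the destination address 192.0.2.1 and port are illustrative; the 20000-byte size is taken from the question):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9000);               /* illustrative port */
    inet_pton(AF_INET, "192.0.2.1", &dest.sin_addr);

    static char payload[20000];                /* the 20000 bytes from the question */
    ssize_t n = sendto(sock, payload, sizeof(payload), 0,
                       (struct sockaddr *)&dest, sizeof(dest));
    if (n < 0 && errno == EMSGSIZE)
        fprintf(stderr, "datagram too large to send as one message\n");
    else if (n >= 0)
        /* One datagram handed to IP; it may be fragmented below UDP,
           but the receiver will see it as a single 20000-byte datagram. */
        printf("queued %zd bytes as one datagram\n", n);
    return 0;
}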
I would like to make two-way communication using TCP and UDP sockets in Linux. The idea is like the following. It is a kind of sensor network.
server side
while loop (
(1) check if there is an incoming TCP control message
if yes, update the system based on the control message
the rest of the time, keep spamming out UDP messages
)
client side
while (
keep receiving the UDP broadcast messages
once it receives 100 UDP messages, it has to send a TCP control message to the server
)
Part (1) is the only place I cannot work out. I find that if I use a non-blocking TCP socket with select() in part (1) with a short interval, select() soon returns 0 and the control message is not received. Alternatively, I could set a long interval for select(), but that blocks the loop and the UDP messages cannot be sent out. I want the UDP messages sent out efficiently, provided that the server can also notice a client TCP control message at any time.
Could anyone give me some hints on part (1)?
You should only attempt a recv() if the corresponding read FD is set after select(). If select() returns zero, none of them is set: the timeout has expired, so you shouldn't do anything except send your UDP message.
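A minimal sketch of one iteration of that loop in C (assuming tcp_sock is an already-accepted connection, udp_sock is already set up, and bcast holds the broadcast destination; the 10 ms timeout and the payload are placeholders):

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Wait briefly for a TCP control message; otherwise fall through
   and send the next UDP message. */
void serve_once(int tcp_sock, int udp_sock,
                const struct sockaddr_in *bcast) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(tcp_sock, &readfds);

    struct timeval tv = { .tv_sec = 0, .tv_usec = 10000 }; /* 10 ms */
    int ready = select(tcp_sock + 1, &readfds, NULL, NULL, &tv);

    if (ready > 0 && FD_ISSET(tcp_sock, &readfds)) {
        char ctrl[256];
        ssize_t n = recv(tcp_sock, ctrl, sizeof(ctrl), 0);
        if (n > 0) { /* update the system from the control message */ }
    }
    /* ready == 0 just means the timeout expired: send the UDP message. */
    const char msg[] = "sensor update";        /* placeholder payload */
    sendto(udp_sock, msg, sizeof(msg), 0,
           (const struct sockaddr *)bcast, sizeof(*bcast));
}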
There was this question posted in class today about API design in socket programming.
Why are listen() and accept() provided as different functions and not merged into one function?
Now as far as I know, listen marks a bound socket as ready to accept connections and sets an upper bound on the queue of incoming connections. If accept and listen were merged, could such a queue not be maintained?
Or is there some other explanation?
Thanks in advance.
listen() means "start listening for clients"
accept() means "accept a client, blocking until one connects if necessary"
It makes sense to separate these two, because if they were merged, then the single merged function would block. This would cause problems for non-blocking I/O programs.
For example, let's take a typical server that wants to listen for new client connections, but also monitor existing client connections for new messages. A server like this typically uses a non-blocking I/O model so that it is not blocked on any one particular socket. So it needs a way to "start listening" on the server socket without blocking on it. Once listening on the server socket has been initiated, the server socket is added to the set of sockets being monitored via select() (or poll() on some systems). The select() call indicates when there is a client pending on the server socket. The program can then call accept() without fear of blocking on that socket.
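A condensed sketch of that model in C (error handling omitted; the port and backlog are arbitrary):

#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);               /* arbitrary port */
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));

    listen(srv, 16);   /* start listening: returns immediately, never blocks */

    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(srv, &readfds);
        /* ...existing client sockets would be FD_SET() here too... */

        select(srv + 1, &readfds, NULL, NULL, NULL);

        if (FD_ISSET(srv, &readfds)) {
            /* A client is pending, so accept() won't block here. */
            int client = accept(srv, NULL, NULL);
            if (client >= 0) close(client);    /* sketch: handle it instead */
        }
    }
}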
listen(2) makes a given TCP socket a server socket, i.e. it creates a queue for accepting connection requests from clients. Only the listening-side port, and possibly the IP address, are bound (thus you need to call bind(2) before listen(2)). accept(2) then actually takes such a connection request from that queue and turns it into a connected socket (the four parts required for two-way communication - source IP address, source port number, destination IP address, and destination port number - are assigned). listen(2) is called only once, while accept(2) is usually called multiple times.
Under the hood, bind assigns an address and a port to a socket descriptor. It means the port is now reserved for that socket, and therefore the system won't be able to assign the same port to another application (an exception exists, but I won't go into details here). It's also a one-time-per-socket operation.
Then listen is responsible for establishing the number of connections that can be queued for a given socket descriptor, and for indicating that you're now willing to receive connections.
On the other hand, accept is used to dequeue the first connection from the queue of pending connections, and create a new socket to handle further communication through it. It may be called multiple times, and generally is. By default, this operation is blocking if there are no connections in the queue.
Now suppose you want to use an async IO mechanism (like epoll, poll, kqueue, select, etc). If listen and accept were a single API, how would you indicate that a given socket is willing to receive connections? The async mechanism needs to know you wish to handle this type of event as well.
Since the semantics are quite different, it makes sense to keep them apart.
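To illustrate the async point, a small sketch using Linux epoll (assuming epfd is an existing epoll instance and srv is an already-bound TCP socket; the backlog of 16 is arbitrary):

#include <sys/epoll.h>
#include <sys/socket.h>

/* Register a listening socket with epoll. Calling listen() separately
   is what makes "readable" on srv mean "a connection is waiting to be
   accepted" - the async mechanism needs that distinction. */
void watch_listener(int epfd, int srv) {
    listen(srv, 16);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = srv };
    epoll_ctl(epfd, EPOLL_CTL_ADD, srv, &ev);
    /* Later, when epoll_wait() reports EPOLLIN on srv,
       accept() can be called without blocking. */
}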