My server is using a single UDP socket to receive UDP streams from different IP addresses. (All senders send to the same port.)
When recv() returns on the server with a chunk of data, might that chunk contain bytes from different sources?
Assuming not, is there a reliable way to determine which sender sent that entire chunk?
In UDP, each chunk received will be exactly what a sender previously passed to send() or sendto(): unlike TCP, UDP maintains message boundaries.
You can find out the IP address and port the received packet was sent from by calling recvfrom() instead of recv(). Those values will be written into the struct sockaddr_in that you provide a pointer to.
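A minimal sketch of that pattern (IPv4 only; error handling trimmed for brevity):

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

void receive_one(int fd) {
    char buf[65536];                  /* large enough for any UDP datagram */
    struct sockaddr_in src;
    socklen_t srclen = sizeof(src);

    /* recvfrom() fills in the sender's address alongside the data */
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                         (struct sockaddr *)&src, &srclen);
    if (n < 0) {
        perror("recvfrom");
        return;
    }

    char ip[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &src.sin_addr, ip, sizeof(ip));
    printf("%zd bytes from %s:%u\n", n, ip, (unsigned)ntohs(src.sin_port));
}

Because UDP preserves message boundaries, the whole buffer filled by one recvfrom() call came from the single sender reported in src.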
A successful sendto() system call means that the data has been placed into the socket's send buffer.
The kernel documentation states that data stored in a socket buffer can somehow be reordered:
"Packets can be reordered in the transmit path, for instance in the packet scheduler."
I cannot imagine when this can happen for a UDP socket. I was assuming that a UDP socket's send buffer works as a sort of queue, and that data requested to be sent is never reordered.
So can you please explain how this can happen? Ideally I want some understanding of the extent to which this reordering can take place.
P.S. I am interested in the sender side only, because sent packets can be rerouted to MSG_ERRQUEUE using the timestamping feature. I understand that nothing is guaranteed on the receiver side.
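For reference, enabling the timestamping feature mentioned above looks roughly like this (a Linux-specific, hedged sketch; the flag combination shown is one common choice, not the only one):

#include <sys/socket.h>
#include <linux/net_tstamp.h>   /* SOF_TIMESTAMPING_* flags */

/* Ask the kernel to generate software TX timestamps; completed
   sends are then reported back on the socket's MSG_ERRQUEUE. */
int enable_tx_timestamps(int fd) {
    int flags = SOF_TIMESTAMPING_TX_SOFTWARE   /* timestamp on transmit */
              | SOF_TIMESTAMPING_SOFTWARE      /* report software stamps */
              | SOF_TIMESTAMPING_OPT_ID;       /* tag each datagram with a counter */
    return setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
                      &flags, sizeof(flags));  /* 0 on success, -1 on error */
}

/* After each sendto(), the timestamp is read back with
   recvmsg(fd, &msg, MSG_ERRQUEUE) and parsed from the ancillary data;
   comparing the OPT_ID counters against send order is what would
   reveal any transmit-path reordering the question asks about. */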
I'm working on a game server that communicates with a game client, and I wonder whether the packets the server sends to the client remain in sequence when the client receives them.
For example, the server sends packets A, B, C;
could the client receive B, A, C?
I have read the great blog http://packetlife.net/blog/2010/jun/7/understanding-tcp-sequence-acknowledgment-numbers/
It seems that every packet sent by the server has a corresponding ACK from the client, but it does not say why the packets the client receives are in the same sequence the server sent them.
It's worth reading TCP's RFC, particularly section 1.5 (Operation), which explains the process. In part, it says:
The TCP must recover from data that is damaged, lost, duplicated, or delivered out of order by the internet communication system. This is achieved by assigning a sequence number to each octet transmitted, and requiring a positive acknowledgment (ACK) from the receiving TCP. If the ACK is not received within a timeout interval, the data is retransmitted. At the receiver, the sequence numbers are used to correctly order segments that may be received out of order and to eliminate duplicates. Damage is handled by adding a checksum to each segment transmitted, checking it at the receiver, and discarding damaged segments.
I don't see where it's ever made explicit, but since the acknowledgement (as described in section 2.6) describes the next expected packet, the receiving TCP implementation is only ever acknowledging consecutive sequences of packets from the beginning. That is, if you never receive the first packet, you never send an acknowledgement, even if you've received all other packets in the message; if you've received 1, 2, 3, 5, and 6, you only acknowledge 1-3.
For completeness, I'd also direct your attention to section 2.6, again, after it describes the above-quoted section in more detail:
An acknowledgment by TCP does not guarantee that the data has been delivered to the end user, but only that the receiving TCP has taken the responsibility to do so.
So, TCP ensures the order of packets, unless the application doesn't receive them. That exception probably wouldn't be common, except for cases where the application is unavailable, but it does mean that an application shouldn't assume that a successful send is equivalent to a successful reception. It probably is, for a variety of reasons, but it's explicitly outside of the protocol's scope.
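To make that cumulative-acknowledgment behavior concrete, here is a toy model (not real TCP code, just the bookkeeping idea applied to the 1, 2, 3, 5, 6 example above):

#include <stdbool.h>
#include <stdio.h>

int main(void) {
    /* segments 1,2,3,5,6 have arrived; 4 is missing
       (index 0 is unused so indices match segment numbers) */
    bool received[7] = { false, true, true, true, false, true, true };

    /* the cumulative ACK names the first segment still missing;
       everything before it is implicitly acknowledged */
    unsigned next = 1;
    while (next < 7 && received[next])
        next++;

    printf("next expected segment: %u\n", next);
    /* prints 4: only 1-3 are acknowledged, even though 5 and 6
       are already sitting in the receive buffer */
    return 0;
}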
TCP guarantees sequence and integrity of the byte stream. You will not receive data out of sequence. From RFC 793:
Reliable Communication: A stream of data sent on a TCP connection is delivered reliably and in order at the destination.
I have a question about socket programming. When I use a socket to send data, I can use an API such as sendto() to send using TCP or UDP.
For sendto(), we give an array pointer and the number of bytes we want to send.
In this case, if I give a large byte count (e.g. 20000 bytes), based on my understanding the MTU of the network will not be that big, so the socket actually sends multiple packets instead of one big packet. Since these 20000 bytes are split into several UDP/TCP packets, will these 20000 bytes be seen as one packet at the beginning? Is this process UDP/TCP fragmentation?
My other question is: if I pass a data size smaller than the MTU to sendto(), can I then guarantee that with one call to sendto(), the socket sends only one TCP/UDP packet?
Thanks in advance.
will these 20000 bytes be seen as one packet at the beginning? Is this process UDP/TCP fragmentation?
UDP will send it as one datagram if your socket send buffer is large enough to hold it. Otherwise you will get EMSGSIZE. It may subsequently get fragmented at the IP layer, and if a fragment gets lost so does the whole datagram, but if all the fragments arrive the entire datagram will be received intact.
TCP will send it all, segmenting and fragmenting it however it sees fit. It will all arrive, intact and in order, unless there is a long enough network outage.
My other question is: if I pass a data size smaller than the MTU to sendto(), can I then guarantee that with one call to sendto(), the socket sends only one TCP/UDP packet?
UDP: yes.
TCP: no.
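To illustrate the UDP half of that answer, a hedged sketch (the 20000-byte payload and loopback destination are just for demonstration):

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9000);                 /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    static char payload[20000];                 /* one 20000-byte datagram */
    ssize_t n = sendto(fd, payload, sizeof(payload), 0,
                       (struct sockaddr *)&dst, sizeof(dst));
    if (n < 0)
        perror("sendto");                       /* e.g. EMSGSIZE if it cannot fit */
    else
        printf("sent %zd bytes as one datagram; "
               "IP may still fragment it in transit\n", n);
    return 0;
}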
Does UDP allow two clients to connect at the same time to a server port?
DatagramSocket udp1 = new DatagramSocket(8000); // = localhost:8000 <-> ?
DatagramSocket udp2 = new DatagramSocket(8000);
What happens if udp1 and udp2 are created from two different IPs and send data at the same time?
Will it cause any issue?
Note: UDP doesn't really have a concept of "connecting", just sending and receiving packets. (For example, if making a TCP connection is analogous to making a telephone call, then sending a UDP packet is more like mailing a letter.)
Regarding two packets arriving at the same UDP port on a server at the same time: the TCP/IP stack keeps a fixed-size receive buffer for each socket that the server creates, and whenever a packet arrives at the port that socket is bound to, the packet is placed into that buffer. The server program is then woken up and can recv() the data whenever it cares to do so. So in most cases, both packets will be placed into the buffer and then recv()'d and processed by the server program. The exception is when there is not enough room left in the buffer for one or both of the packets to fit into it (remember, it's a fixed-size buffer); in that case, the packet(s) that don't fit will simply be dropped and never seen again.
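If that fixed-size buffer turns out to be too small for your traffic, it can be inspected and enlarged; a hedged sketch (the 1 MiB figure is arbitrary, and the kernel may clamp the value you request):

#include <stdio.h>
#include <sys/socket.h>

void show_and_grow_rcvbuf(int fd) {
    int size = 0;
    socklen_t len = sizeof(size);

    /* read the current receive-buffer size */
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, &len);
    printf("current SO_RCVBUF: %d bytes\n", size);

    /* ask for more room so bursts of datagrams are less
       likely to be dropped */
    int wanted = 1 << 20;
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted));
}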
Say a server S has a successful TCP connection with a client C.
C keeps sending 256-byte-long packets to S.
Is it possible for S to receive only part of one of those packets, while the connection does not break (and can continue to receive new packets correctly)?
I thought the TCP protocol itself guaranteed that no bytes are lost while connected. But it seems not?
P.S. I'm using Python's socketserver library.
The TCP protocol does guarantee delivery. Thus (assuming there are no bugs in your code and in the TCP stack), the scenario you describe is impossible.
Do bear in mind that TCP is stream- rather than packet-oriented. This means that you may need to call recv() multiple times to read the entire 256-byte packet.
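A common way to handle that is a small helper that loops over short reads until the full fixed-size message has arrived (a sketch; recv_exact is a made-up name):

#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly len bytes from a stream socket, looping over
   short reads. Returns 0 on success, -1 on error or EOF. */
int recv_exact(int fd, char *buf, size_t len) {
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0)
            return -1;   /* error, or the peer closed the connection */
        got += (size_t)n;
    }
    return 0;
}

/* usage: char msg[256]; if (recv_exact(fd, msg, sizeof(msg)) == 0) ... */

The same idea applies in Python: call recv() in a loop, accumulating bytes until all 256 have arrived.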
As @NPE said, TCP is a stream-oriented protocol, which means there is no guarantee of how many data bytes are sent in each TCP packet, nor of how many bytes are available for reading on the receiving socket. What TCP ensures is that the receiving socket will be provided with the data bytes in the same order that they were sent.
Consider communication over a TCP socket between two hosts A and B.
When the application on A requests to send 256 bytes, for example, A's TCP stack can send them in one packet, or in several individual packets, or even wait before sending them. So B may receive one or several packets containing all or part of the bytes A asked to send, and when the application on B is notified that received bytes are available, it is not guaranteed that it can read all 256 bytes at once.
The only guaranteed thing is that the bytes B reads are in the same order that A sent them.