Will a TCP connection lose packets? - sockets

Say server S has a successful TCP connection with client C.
C keeps sending 256-byte-long packets to S.
Is it possible that S receives only part of one of those packets, but the connection does not break (i.e., it can continue to receive new packets correctly)?
I thought the TCP protocol itself guaranteed that no bytes are lost while the connection is up. But it seems not?
P.S. I'm using Python's socketserver library.

The TCP protocol does guarantee delivery. Thus (assuming there are no bugs in your code and in the TCP stack), the scenario you describe is impossible.
Do bear in mind that TCP is stream- rather than packet-oriented. This means that you may need to call recv() multiple times to read the entire 256-byte packet.
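For example, here is a minimal sketch of a receive loop (assuming a plain blocking socket; inside a socketserver handler, self.request is such a socket) that keeps calling recv() until a full fixed-size message has arrived:

    MSG_LEN = 256  # the fixed message size from the question

    def recv_exactly(sock, n):
        """Call recv() until exactly n bytes have been read. TCP is a
        byte stream, so a single recv() may return fewer bytes."""
        chunks = []
        remaining = n
        while remaining > 0:
            chunk = sock.recv(remaining)
            if not chunk:  # peer closed the connection mid-message
                raise ConnectionError("connection closed before full message")
            chunks.append(chunk)
            remaining -= len(chunk)
        return b"".join(chunks)

Calling recv_exactly(sock, MSG_LEN) then yields one whole 256-byte message per call, however the bytes were split across TCP segments.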

As @NPE said, TCP is a stream-oriented protocol, which means there is no guarantee of how many data bytes are sent in each TCP packet nor how many bytes are available for reading from the receiving socket. What TCP ensures is that the receiving socket will be provided with the data bytes in the same order that they were sent.
Consider communication over a TCP connection between two hosts A and B.
When the application on A requests to send 256 bytes, for example, A's TCP stack can send them in one packet, in several individual packets, or even wait before sending them. So B may receive one or several packets containing all or part of the bytes A asked to send, and when the application on B is notified that received bytes are available, there is no guarantee that it can read all 256 bytes at once.
The only guarantee is that the bytes B reads are in the same order that A sent them.
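In practice, applications impose their own message boundaries on top of the stream. One common approach, sketched below with a 4-byte length prefix (any fixed framing scheme works; this is an illustration, not the only way), is to prefix each message with its length:

    import struct

    def send_msg(sock, payload):
        # Prefix each message with a 4-byte big-endian length so the
        # receiver knows where one message ends and the next begins.
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_msg(sock):
        header = recv_exactly(sock, 4)  # recv_exactly() as sketched earlier
        (length,) = struct.unpack("!I", header)
        return recv_exactly(sock, length)

sendall() is used rather than send() because, symmetrically to reads, a single send() may transmit only part of the buffer.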

Related

Using UDP socket with poll. Blocking vs Non-blocking

A lot of examples can be found about non-blocking TCP sockets, but I find it difficult to find a good explanation of how UDP sockets should be handled with the poll/select/epoll system calls.
Blocking or non-blocking?
When dealing with TCP sockets, it makes sense to set them to non-blocking, since it takes only one slow client connection to prevent the server from serving other clients. However, there are no ACK messages in UDP, so my assumption is that writing to UDP should be fast enough in both cases. Does that mean that we can safely use a blocking UDP socket with the poll family of system calls if each time we are going to send a small amount of data (10 KB, for example)? From this discussion I gather that an ARP request is the only thing that can substantially block the sendto function, but isn't that a one-time thing?
Return value of sendto
Let's say the socket is non-blocking. Can there be a scenario where I try to send 1000 bytes of data and the sendto function sends only part of it (say 300 bytes)? Does that mean it has just sent a UDP packet with 300 bytes, and the next time I call sendto I have to account for it sending a new UDP packet again? Is this situation still possible for blocking sockets?
Return value of recvfrom
The same question applies to recvfrom. Can there be a situation where I need to call recvfrom more than once to obtain a full UDP packet? Is that behaviour different for blocking and non-blocking sockets?
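For illustration, here is a minimal sketch of a blocking UDP socket driven by select() (chosen over poll/epoll only for portability; the port is illustrative). Note that each recvfrom() returns one complete datagram, never a fragment of one:

    import select
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5005))  # illustrative port

    while True:
        readable, _, _ = select.select([sock], [], [], 1.0)
        if sock in readable:
            # One recvfrom() returns exactly one datagram (truncated if
            # larger than the buffer); no loop is needed to reassemble it.
            data, addr = sock.recvfrom(65535)
            print(f"{len(data)} bytes from {addr}")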

Demultiplexing UDP streams from different sources

My server is using a single UDP socket to receive UDP streams from different IP addresses. (All senders send to the same port.)
When recv returns on the server with a chunk of data, might that chunk contain bytes from different sources?
Assuming not, is there a reliable way to determine which sender sent that entire chunk?
In UDP, each chunk received will be exactly what a sender previously passed to send() or sendto(); unlike TCP, UDP maintains message boundaries.
You can find out the IP address and port the received packet was sent from by calling recvfrom() instead of recv(). Those values will be written into the struct sockaddr_in that you provide a pointer to.
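For instance, a sketch in Python of demultiplexing by sender address (the port and the per-sender buffers are illustrative):

    import socket
    from collections import defaultdict

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 6000))  # illustrative port

    streams = defaultdict(list)  # per-sender buffers, keyed by (ip, port)

    while True:
        # Each datagram arrives whole and belongs entirely to one sender,
        # so keying on addr demultiplexes the streams reliably.
        data, addr = sock.recvfrom(65535)
        streams[addr].append(data)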

Does TCP ensure packets are received in the sequence the server sent them?

I'm working on a game server that communicates with a game client, and I wonder whether the packets the server sends to the client remain in sequence when the client receives them.
For example, the server sends packets A, B, C;
could the client receive B, A, C?
I have read the great blog post http://packetlife.net/blog/2010/jun/7/understanding-tcp-sequence-acknowledgment-numbers/
It seems that every packet sent by the server has a corresponding ACK from the client, but it does not say why the packets the client receives are in the same sequence the server sent them.
It's worth reading TCP's RFC, particularly section 1.5 (Operation), which explains the process. In part, it says:
The TCP must recover from data that is damaged, lost, duplicated, or delivered out of order by the internet communication system. This is achieved by assigning a sequence number to each octet transmitted, and requiring a positive acknowledgment (ACK) from the receiving TCP. If the ACK is not received within a timeout interval, the data is retransmitted. At the receiver, the sequence numbers are used to correctly order segments that may be received out of order and to eliminate duplicates. Damage is handled by adding a checksum to each segment transmitted, checking it at the receiver, and discarding damaged segments.
I don't see where it's ever made explicit, but since the acknowledgement (as described in section 2.6) describes the next expected packet, the receiving TCP implementation is only ever acknowledging consecutive sequences of packets from the beginning. That is, if you never receive the first packet, you never send an acknowledgement, even if you've received all other packets in the message; if you've received 1, 2, 3, 5, and 6, you only acknowledge 1-3.
For completeness, I'd also direct your attention to section 2.6, again, after it describes the above-quoted section in more detail:
An acknowledgment by TCP does not guarantee that the data has been delivered to the end user, but only that the receiving TCP has taken the responsibility to do so.
So, TCP ensures the order of packets, unless the application doesn't receive them. That exception probably wouldn't be common, except for cases where the application is unavailable, but it does mean that an application shouldn't assume that a successful send is equivalent to a successful reception. It probably is, for a variety of reasons, but it's explicitly outside of the protocol's scope.
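To make the cumulative-acknowledgment logic described above concrete, here is a toy sketch (not real TCP code, just the bookkeeping) of how a receiver decides what to acknowledge:

    def next_expected(received, first=1):
        # Cumulative ACK: report the first segment not yet received in
        # order. Segments sitting beyond a gap do not advance the ACK.
        ack = first
        while ack in received:
            ack += 1
        return ack

    # Received 1, 2, 3, 5, and 6 (4 was lost): only 1-3 are covered,
    # and the ACK says "next expected is 4".
    print(next_expected({1, 2, 3, 5, 6}))  # -> 4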
TCP guarantees sequence and integrity of the byte stream. You will not receive data out of sequence. From RFC 793:
Reliable Communication: A stream of data sent on a TCP connection is delivered reliably and in order at the destination.

Socket programming: what happens if I write more data than one TCP/UDP packet can carry?

I have a question about socket programming. When I use sockets to send data, I can use an API such as sendto() over TCP or UDP.
For sendto(), we pass a buffer pointer and the number of bytes we want to send.
In this case, if I give a large byte count (e.g. 20000 bytes), my understanding is that the network's MTU will not be that big, so the socket actually sends multiple packets instead of one big packet. Since these 20000 bytes are split into several UDP/TCP packets, will these 20000 bytes be seen as one packet at the beginning? Is this process UDP/TCP fragmentation?
My other question is: if I pass a data size smaller than the MTU to sendto(), am I guaranteed that one call to sendto() makes the socket send only one TCP/UDP packet?
Thanks in advance.
will these 20000 bytes be seen as one packet at the beginning? Is this process UDP/TCP fragmentation?
UDP will send it as one datagram if your socket send buffer is large enough to hold it. Otherwise you will get EMSGSIZE. It may subsequently get fragmented at the IP layer, and if a fragment gets lost so does the whole datagram, but if all the fragments arrive the entire datagram will be received intact.
TCP will send it all, segmenting and fragmenting it however it sees fit. It will all arrive, intact and in order, unless there is a long enough network outage.
My other question is: if I pass a data size smaller than the MTU to sendto(), am I guaranteed that one call to sendto() makes the socket send only one TCP/UDP packet?
UDP: yes.
TCP: no.
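As a concrete illustration of the difference (a sketch; the address is a placeholder):

    import socket

    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # The 20000 bytes form a single UDP datagram. The IP layer may
        # fragment it on the wire, but the receiver sees it whole or
        # not at all.
        udp.sendto(b"x" * 20000, ("198.51.100.1", 9999))  # placeholder
    except OSError as err:
        print("sendto failed:", err)  # e.g. EMSGSIZE if too large

    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("198.51.100.1", 9999))  # placeholder
    # With TCP there is no datagram boundary to preserve: use sendall()
    # and let the stack segment the 20000 bytes however it sees fit.
    tcp.sendall(b"x" * 20000)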

How will the TCP protocol delay packet transfer when one of the packets is dropped?

If the client socket sends:
Packet A - dropped
Packet B
Packet C
Will the server socket receive and queue B and C, and then, when A is received, pass B and C to the server application immediately? Or will B and C be resent too? Or will no packets be sent at all until A is delivered?
TCP is a sophisticated protocol that changes many parameters depending on the current network state; whole books have been written about the subject. The clearest way to answer your question is to say that TCP generally maintains a given send 'window' size in bytes. This is the amount of data that will be sent before waiting for acknowledgments of previously sent data.
In older TCP specifications, a dropped packet within that window would result in a complete resend of data from the dropped packet onwards. Since that is obviously a little wasteful, TCP now employs the selective acknowledgment (SACK) option (RFC 2018), which results in just the lost/corrupted packet being resent.
Back to your example: assuming the window size is large enough to encompass all three packets, and provided you are taking advantage of the latest TCP standard (I don't see why you wouldn't), if packet A were dropped, only packet A would be resent; the receiver queues B and C and delivers A, B, and C to the application in order once A arrives. If each packet is individually larger than the window, then the packets must be sent and acknowledged sequentially.
It depends on the latencies. In general, A is resent first. If the receiver gets it and already has B and C buffered, it can acknowledge them as well.
If this happens fast enough, B and C won't be resent, or maybe only B will be.