How is SCTP better than TCP? (in a traffic scenario) - sockets

SCTP can send a single file using multiple streams, while TCP sends a single file using a single stream.
Now the question is: how is SCTP better than TCP (in a traffic scenario)?

SCTP is not "better" than TCP in any way, but it does something different.
TCP emulates a reliable, ordered stream of octets over an unreliable unordered packet transport, which is conceptually very similar to reading from a file (without the ability to seek).
SCTP emulates reliable, in-order delivery of distinct messages (where "message" means a defined chunk of data of some known length). Like UDP, it delivers one complete message at a time. Like TCP, it guarantees that messages arrive, and in the relative order in which they were sent.
SCTP has the ability to send different messages in separate streams, which reduces latency, prevents head-of-line blocking, and makes better use of the available bandwidth in some scenarios. A web page with style information and images is the classic example.
It does not send a single file via multiple streams (which would not make any sense).
(There are a few other features that I'm not naming because they have little relevance to the question.)
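To make multistreaming concrete, here is a minimal sketch using the lksctp-tools API on Linux (sctp_sendmsg); the address, port, stream numbers, and payloads are illustrative, and error handling is omitted:

    /* Minimal sketch: two messages sent on two streams of one SCTP
       association. Requires Linux with lksctp-tools (link with -lsctp).
       Stream counts are negotiated at association setup (see SCTP_INITMSG);
       the defaults are assumed to allow at least two outbound streams. */
    #include <string.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/sctp.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);

        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(5000);                /* illustrative port */
        inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr);
        connect(fd, (struct sockaddr *)&peer, sizeof peer);

        /* Each message travels on its own stream, so a loss affecting the
           page markup does not hold back the stylesheet, and vice versa. */
        const char *html = "<html>...</html>";
        const char *css  = "body { color: red; }";
        sctp_sendmsg(fd, html, strlen(html), NULL, 0, 0, 0, 0 /* stream 0 */, 0, 0);
        sctp_sendmsg(fd, css,  strlen(css),  NULL, 0, 0, 0, 1 /* stream 1 */, 0, 0);
        return 0;
    }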

SCTP can be considered a mix of UDP and TCP: it is message-based like UDP and connection-oriented like TCP, ensuring in-sequence delivery of messages along with a congestion-control mechanism. That is, SCTP is connection oriented but operates at the message level.
It bundles several logical flows into a single SCTP association that operates on messages (chunks) rather than bytes. This capability of SCTP to transmit several independent streams of chunks in parallel is referred to as multistreaming, and it mitigates head-of-line blocking. With TCP, even if the 3rd and 4th packets arrive fine, the loss of the 2nd packet forces a retransmission, and the 3rd and 4th packets must wait until the 2nd is successfully received. With SCTP, this head-of-line blocking is reduced because the single association is split into multiple independent streams of chunks (messages).
Also, note that SCTP can send messages in unordered mode too, which avoids head-of-line blocking entirely; the upper layers must then provide their own mechanism for reordering messages if required.
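As a hedged fragment, assuming a Linux lksctp-tools environment and an already-connected SCTP socket, unordered delivery is requested per message with the SCTP_UNORDERED flag:

    /* Sketch: send one message in unordered mode. fd is a connected SCTP
       socket. The receiver may get this message ahead of earlier ordered
       ones, so any required ordering must be rebuilt at the application
       level. */
    #include <stddef.h>
    #include <netinet/sctp.h>

    void send_unordered(int fd, const char *msg, size_t len)
    {
        sctp_sendmsg(fd, msg, len, NULL, 0, 0, SCTP_UNORDERED,
                     0 /* stream */, 0, 0);
    }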

Related

tcp or udp for a game server?

I know, I know. This question has been asked many times before. But I've spent an hour googling now without finding what I am looking for so I will ask it again and mention my context along with what makes the decision hard for me:
I am writing the server for a game where the response time is very important and a packet loss every now and then isn't a problem.
Judging by this and the fact that I as a server mostly have to send the same data to many different clients, the obvious answer would be UDP.
I had already started writing the code when I came across this:
In some applications TCP is faster (better throughput) than UDP.
This is the case when doing lots of small writes relative to the MTU size. For example, I read an experiment in which a stream of 300 byte packets was being sent over Ethernet (1500 byte MTU) and TCP was 50% faster than UDP.
In my case the information units I'm sending are <100 bytes, which means each one fits into a single UDP packet (which is quite pleasant for me because I don't have to deal with fragmentation), and UDP seems much easier to implement for my purpose because I don't have to deal with a huge number of individual connections. But my top priority is to minimize the time between
client sends something to server
and
client receives response from server
So I am willing to pick TCP if that's the faster way.
Unfortunately I couldn't find more information about the above quoted case, which is why I am asking: Which protocol will be faster in my case?
UDP is still going to be better for your use case.
The main problem with TCP and games is what happens when a packet is dropped. In UDP, that's the end of the story; the packet is dropped and life continues exactly as before with the next packet. With TCP, data transfer across the TCP stream will stop until the dropped packet is successfully retransmitted, which means that not only will the receiver not receive the dropped packet on time, but subsequent packets will be delayed also -- most likely they will all be received in a burst immediately after the resend of the dropped packet is completed.
Another feature of TCP that might work against you is its automatic bandwidth control -- i.e. TCP will interpret dropped packets as an indication of network congestion, and will dial back its transmission rate in response; potentially to the point of dialing it down to near zero, in cases where lots of packets are being lost. That might be useful if the cause really was network congestion, but dropped packets can also happen due to transient network errors (e.g. user pulled out his Ethernet cable for a couple of seconds), and you might not want to handle those problems that way; but with TCP you have no choice.
One downside of UDP is that it often takes special handling to get incoming UDP packets through the user's firewall, as firewalls are often configured to block incoming UDP packets by default. For an action game it's probably worth dealing with that issue, though.
Note that it's not a strict either/or option; you can always write your game to work over both TCP and UDP, and either use them simultaneously, or let the program and/or the user decide which one to use. That way if one method isn't working well, you can simply use the other one, and it only takes twice as much effort to implement. :)
In some applications TCP is faster (better throughput) than UDP. This is the case when doing lots of small writes relative to the MTU size. For example, I read an experiment in which a stream of 300 byte packets was being sent over Ethernet (1500 byte MTU) and TCP was 50% faster than UDP.
If this turns out to be an issue for you, you can obtain the same efficiency gain in your UDP protocol by placing multiple messages together into a single larger UDP packet. i.e. instead of sending 3 100-byte packets, you'd place those 3 100-byte messages together in 1 300-byte packet. (You'd need to make sure the receiving program is able to correctly interpret this larger packet, of course.) That's really all that the TCP layer is doing here anyway: placing as much data into each outgoing packet as it has available and can fit, before sending it out.
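To illustrate that bundling, here is a minimal sketch in C; the 2-byte length prefix is an illustrative framing choice of mine, not something UDP provides, and the receiver must use the same scheme to split the datagram back into messages:

    /* Sketch: pack three small messages into one UDP datagram, each with
       a 2-byte big-endian length prefix. sock is a connected UDP socket;
       error handling is omitted. */
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    static size_t pack(uint8_t *buf, size_t off, const uint8_t *msg, size_t len)
    {
        uint16_t n = htons((uint16_t)len);
        memcpy(buf + off, &n, 2);            /* length prefix */
        memcpy(buf + off + 2, msg, len);     /* payload */
        return off + 2 + len;
    }

    void send_bundle(int sock,
                     const uint8_t *m1, size_t l1,
                     const uint8_t *m2, size_t l2,
                     const uint8_t *m3, size_t l3)
    {
        uint8_t buf[1472];                   /* stays inside a 1500-byte MTU */
        size_t off = 0;
        off = pack(buf, off, m1, l1);
        off = pack(buf, off, m2, l2);
        off = pack(buf, off, m3, l3);
        send(sock, buf, off, 0);             /* one datagram, three messages */
    }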

Reliability of small UDP packet on localhost communication

One of the reasons that UDP is not a good choice even for localhost communication is out-of-order delivery [1]. But if I limit the size of each datagram so that fragmentation cannot occur, e.g. to 1 KB of data, can I assume that the reliability of UDP is the same as TCP's?
[1] Why do I get UDP datagrams out of order even with processes running locally?
No, it's not the same.
Getting packets in sequence is not the only thing that comes into the picture when you talk about reliability; there's more to it.
From RFC 768 (User Datagram Protocol):
This protocol provides a procedure for application programs to send messages to other programs with a minimum of protocol mechanism. The protocol is transaction oriented, and delivery and duplicate protection are not guaranteed. Applications requiring ordered reliable delivery of streams of data should use the Transmission Control Protocol (TCP) [2].
So, by keeping datagrams small you may ensure that out-of-order delivery never happens, but you still can't ensure the data is correctly received at the other end. This holds even when you are sending data over localhost. A bit error can occur for any unknown reason; that's why there is a checksum in the header. If the checksum at the receiving end doesn't match, the packet is discarded without the sender knowing about it. This doesn't happen with TCP, since the receiver sends an ACK to the sender on receiving correct data.

Does UDP allow repacketization?

I know that for TCP you can have for example Nagle's Algorithm enabled. However, can you have something similar for UDP?
Practical question (assume a UDP socket):
If I call send() two times in a short period, with 1 byte of data in each send() call, is it possible that the transport layer decides to send only 1 UDP packet with 1 byte + 1 byte = 2 bytes of data?
Thanks in advance!
No. UDP datagrams are delivered intact exactly as sent, or not at all.
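A tiny localhost demonstration of that behaviour (the port number is illustrative and error handling is omitted):

    /* Sketch: two 1-byte send() calls on UDP remain two distinct
       datagrams; each recv() dequeues exactly one. */
    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int rx = socket(AF_INET, SOCK_DGRAM, 0);
        int tx = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9999);               /* illustrative port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        bind(rx, (struct sockaddr *)&addr, sizeof addr);
        connect(tx, (struct sockaddr *)&addr, sizeof addr);

        send(tx, "A", 1, 0);
        send(tx, "B", 1, 0);                       /* never merged with "A" */

        char buf[64];
        ssize_t n1 = recv(rx, buf, sizeof buf, 0); /* returns 1 byte: "A" */
        ssize_t n2 = recv(rx, buf, sizeof buf, 0); /* returns 1 byte: "B" */
        printf("first read: %zd byte(s), second read: %zd byte(s)\n", n1, n2);

        close(tx);
        close(rx);
        return 0;
    }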
Not according to the RFC (RFC 768). Beyond the IP facilities themselves, UDP really only adds port-based routing and a little extra detection for corruption or misrouting.
That means there's no facility to combine datagrams. In fact, since UDP is meant to be transaction oriented, I would say that combining two transactions into one may well be a bad idea in terms of keeping those transactions separate.
Otherwise, you would need a layer above UDP which could figure out how to extract these transactions from a datagram. At the moment, that's not necessary since the datagram is the transaction.
As added support (though not, of course, definitive) for this contention, see the UDP wikipedia page:
Datagrams – Packets are sent individually and are checked for integrity only if they arrive. Packets have definite boundaries which are honored upon receipt, meaning a read operation at the receiver socket will yield an entire message as it was originally sent.
However, the best support for it comes from one of its clients. UDP was specially engineered for TFTP (among other things) and that protocol breaks down if you cannot distinguish a transaction.
Specifically, one of the TFTP transaction types is the data transaction which consists of an opcode, block number and up to 512 bytes of data. Without a length indication at the start or a sentinel value at the end, there is no way to work out where the next transaction would start unless there is a one-to-one mapping between transaction and datagram.
As an aside, the other four TFTP transaction types have either a fixed length or end-of-string sentinel values but the data transaction is the decider here.

Sending And Receiving Sockets (TCP/IP)

I know that it is possible for multiple packets to be stacked in the buffer waiting to be read, and that a long packet might require a loop of multiple send attempts to be fully sent. But I have a question about packaging in these cases:
If I call recv (or any alternative (low-level) function) when there are multiple packets awaiting to be read, would it return them all stacked into my buffer or only one of them (or part of the first one if my buffer is insufficient)?
If I send a long packet which requires multiple iterations to be sent fully, does it count as a single packet or as multiple packets? Basically: is there anything that marks the data sent so far as an incomplete packet?
These questions came to my mind when I thought about WebSocket packaging. Special characters are used to mark the beginning and end of a packet, which sort of leads to the conclusion that it's otherwise not possible to separate multiple packets.
P.S. All the questions are about TCP/IP but you are welcomed to share information (answers) about UDP as well.
TCP sockets are stream based. The order is guaranteed but the number of bytes you receive with each recv/read could be any chunk of the pending bytes from the sender. You can layer a message based transport on top of TCP by adding framing information to indicate the way that the payload should be chunked into messages. This is what WebSockets does. Each WebSocket message/frame starts with at least 2 bytes of header information which contains the length of the payload to follow. This allows the receiver to wait for and re-assemble complete messages.
For example, in libraries/interfaces that implement the standard WebSocket API or a similar API (such as a browser), the onmessage event will fire once for each message received, and the data attribute of the event will contain the entire message.
Note that in the older Hixie version of WebSockets, each frame was started with '\x00' and terminated with '\xff'. The current standardized IETF 6455 (HyBi) version of the protocol uses the header information that contains the length which allows much easier processing of the frames (but note that both the old and new are still message based and have basically the same API).
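As an illustration of that length-carrying header, here is a sketch of how an RFC 6455 payload length is decoded; masking and opcode handling are omitted, and buf is assumed to point at a complete frame header:

    /* Sketch: extract the payload length from an RFC 6455 frame header.
       The low 7 bits of the second byte hold either the length itself
       (0..125) or a marker selecting a 16-bit (126) or 64-bit (127)
       extended length field. */
    #include <stddef.h>
    #include <stdint.h>

    uint64_t ws_payload_len(const uint8_t *buf)
    {
        uint8_t len7 = buf[1] & 0x7F;     /* drop the MASK bit */
        if (len7 < 126)
            return len7;                  /* small payloads fit directly */
        if (len7 == 126)                  /* 16-bit extended length */
            return ((uint64_t)buf[2] << 8) | buf[3];
        uint64_t len = 0;                 /* 127: 64-bit extended length */
        for (size_t i = 0; i < 8; i++)
            len = (len << 8) | buf[2 + i];
        return len;
    }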
A TCP connection provides a stream of bytes, so treat it as such. No application message boundaries are preserved: one send can correspond to multiple receives, and the other way around. You need loops on both sides.
UDP, on the other hand, is datagram (i.e. message) based. Here one read will always dequeue a single datagram (unless you mess with low-level flags on the socket). Even if your application buffer is smaller than the pending datagram and you read only part of it, the rest of it is lost. The way around that is to limit the size of the datagrams you send to something below the normal MTU of 1500 bytes (less IP and UDP headers, so actually 1472).
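To show the loops the TCP side needs, here is a minimal sketch; the 4-byte length prefix is an illustrative framing choice, not part of TCP itself:

    /* Sketch: read exactly n bytes from a TCP socket. Because TCP is a
       byte stream, one recv() may return any portion of what was sent,
       so the receiver loops until the requested count is in hand. */
    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    static int read_exact(int fd, void *buf, size_t n)
    {
        size_t got = 0;
        while (got < n) {
            ssize_t r = recv(fd, (char *)buf + got, n - got, 0);
            if (r <= 0)
                return -1;                 /* error or peer closed */
            got += (size_t)r;
        }
        return 0;
    }

    /* Reassemble one length-prefixed message; returns its length or -1. */
    int read_message(int fd, uint8_t *payload, size_t max)
    {
        uint32_t len_be;
        if (read_exact(fd, &len_be, 4) < 0)
            return -1;
        uint32_t len = ntohl(len_be);
        if (len > max)
            return -1;                     /* message too big for buffer */
        return read_exact(fd, payload, len) < 0 ? -1 : (int)len;
    }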

What is a message boundary?

What is "message bonudaries" in the following context?
One difference between TCP and UDP is that UDP preserves message boundaries.
I understand the difference between TCP and UDP, but unsure about the definition of "message boundaries". Since UDP includes the destination and port information in each individual packet, could it be this that gives message a "boundary"?
No, message boundaries have nothing to do with destinations or ports. A "message boundary" is the separation between two messages being sent over a protocol. UDP preserves message boundaries. If you send "FOO" and then "BAR" over UDP, the other end will receive two datagrams, one containing "FOO" and the other containing "BAR".
If you send "FOO" and then "BAR" over TCP, no message boundary is preserved. The other end might get "FOO" and then "BAR". Or it might get "FOOBAR". Or it might get "F" and then "OOB" and then "AR". TCP does not make any attempt to preserve application message boundaries -- it's just a stream of bytes in each direction.
Message boundaries in this context are simply the start and end of a message/packet. With TCP connections, all messages/packets are combined into a continuous stream of data, whereas with UDP the messages are given to you in their original form; each will have an exact size in bytes.
You should be careful not to confuse application-level and protocol-level message/packet boundaries; they are very different things, and the question does not clearly distinguish between them.
On an isolated subnet, with small messages, when reliable and ordered delivery is not required, you can cheat and use UDP. A single UDP send/receive will always contain one message, which is simpler to code.
When reliable and ordered delivery is required, or larger application-level messages are needed - then you want to use TCP. Yes, a small bit of discipline is required of your application. You need to properly serialize/deserialize your application-level messages through the TCP stream. This is easily done.
The "answer" is that application-level message boundaries must be assured by the application. UDP can serve for some applications. TCP is better for others (and a much larger set).
Also, if you have multiple threads doing unmanaged writes to a single stream (network or file) then that is a problem in your application.