Does TCP Receive Window Size header field include the bytes in segment headers? - sockets

I'm working on an implementation of TCP for a class, and I'm wondering what the Window Size field actually means.
I understand that the window size is a number of bytes, but does that number of bytes apply to:
- the payload of the TCP segment, not including the header, or to
- the entire TCP segment, including the header?
Thus far, I've looked at Wikipedia and the relevant RFCs:
RFC 793 states that:
The window indicates an allowed number of octets that the sender may
transmit before receiving further permission.
RFC 2581 states that:
receiver's advertised window (rwnd) is a receiver-side limit on the
amount of outstanding data
Neither of these makes it particularly clear. Anyone?

It applies to the payload only. The sender can always transmit ACKs, FINs, RSTs, etc., with no payload.
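To illustrate, here is a minimal sketch of the sender-side rule (this is not from any RFC; the names snd_una, snd_nxt and rwnd follow conventional TCP terminology for the oldest unacknowledged sequence number, the next sequence number to send, and the receiver's advertised window):

def can_send(snd_una, snd_nxt, rwnd, payload_len):
    # Only payload octets occupy sequence-number space and count
    # against the advertised window; header bytes do not.
    bytes_in_flight = snd_nxt - snd_una
    return bytes_in_flight + payload_len <= rwnd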

SwiftNIO: Sent package partially received

I have developed a client and a server using SwiftNIO. I have no problems sending packages of any size between 12 and 1000 bytes, except that when the server sends a package of 528 bytes, the client receives only 512 bytes. I'm trying to figure out why this happens. Does anyone know if there is a way to set a minimum ByteBuffer capacity, or am I missing something?
Thanks to all.
Assuming you're using TCP (that is, using ClientBootstrap), you cannot expect that the boundaries of messages sent by the server will be reflected in your reads. TCP is "stream-oriented": this means that the messages don't have boundaries at all, they behave just like a stream of data. In the NIO case, that means you would expect to see another read shortly after that contains more data.
The initial ByteBuffer capacity used for reads is controlled by the RecvByteBufferAllocator used by the Channel. This can be overridden:
ClientBootstrap(group: group)
    .channelOption(ChannelOptions.recvAllocator,
                   value: AdaptiveRecvByteBufferAllocator(minimum: 1024, initial: 1024, maximum: 65536))
The standard defaults for the AdaptiveRecvByteBufferAllocator in NIO 2.23.0 are a minimum size of 64 bytes, an initial size of 1024 bytes, and a maximum size of 65536 bytes. In general we don't recommend overriding these defaults unless you need to: for TCP NIO will ensure the buffer is appropriately sized for the reads we're seeing.
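As an aside, if your protocol needs message boundaries, you have to impose them yourself (for example, with a length prefix). Here is a minimal sketch, in plain Python rather than NIO, of the stream behaviour described above; socketpair() gives a connected pair of stream sockets:

import socket

# Two writes on a stream socket may well arrive as a single read:
# stream sockets preserve byte order, not message boundaries.
a, b = socket.socketpair()
a.sendall(b"first message ")
a.sendall(b"second message")
print(b.recv(4096))   # typically prints both messages in one read
a.close()
b.close()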

SDP Offer/Answer model with DTMF rtpmap/fmtp mismatch

Imagine an offer SDP that has one "m=" line with codecs 8 and 101 (101 for DTMF), marked as sendrecv:
m=audio 35904 RTP/AVP 8 101
a=rtpmap:8 PCMA/8000
a=rtpmap:101 telephone-event/8000
a=fmtp:101 0-15
a=sendrecv
The offer is answered by an SDP with one "m=" line containing codecs 8 and 120 (120 for DTMF), similarly marked as sendrecv:
m=audio 1235 RTP/AVP 8 120
a=rtpmap:8 PCMA/8000
a=rtpmap:120 telephone-event/8000
a=fmtp:101 0-15
a=sendrecv
From RFC 3264:
For streams marked as sendrecv in the answer, the "m=" line MUST
contain at least one codec the answerer is willing to both send and
receive, from amongst those listed in the offer. The stream MAY
indicate additional media formats, not listed in the corresponding
stream in the offer, that the answerer is willing to send or
receive (of course, it will not be able to send them at this time,
since it was not listed in the offer).
The part of RFC 3264 quoted above shows that using a different dynamic payload type for DTMF in the answer SDP (120 instead of 101) complies with RFC 3264, since codec 8 (G.711 A-law) matches the offer SDP.
Is it okay to say that the codec exchange completed successfully and the DTMF exchange will be okay, or is DTMF not expected to work at this point?
In general:
RTP payload type numbers 0-95 identify a static media encoding. E.g. payload type 8 means PCMA audio with a clock rate of 8000 Hz (RFC 3551). As such, this description doesn't have to (but should) be included in the media format description of the SDP offer/answer, using the "a=rtpmap:" and "a=fmtp:" attributes (RFC 4566).
Payload type numbers 96-127 are dynamic. These can be used to negotiate encodings that aren't included in the static list. When using one of these numbers, an encoding specification has to be included in the media format description to specify the exact encoding parameters.
Both negotiating parties can choose their own dynamic payload type number to represent the same media encoding; it doesn't have to be the same number. This can be useful when a party has already assigned a particular dynamic payload type number to another encoding. In your example one party uses 101 in the m-line and the other uses 120, but these numbers represent the same media encoding (see the "a=rtpmap:" lines). Each party tells the other: 'when you send RTP using encoding X, you must include payload type number Y in the RTP packet headers.'
The payload type number is carried in the PT field of the RTP packet header (RFC 3550).
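To make the mapping concrete, here is a rough Python sketch (illustrative only, not a full SDP parser) of the payload-type table each side derives from the "a=rtpmap:" attributes:

def rtpmap_table(sdp_lines):
    # Map payload type number -> encoding name and clock rate.
    table = {}
    for line in sdp_lines:
        if line.startswith("a=rtpmap:"):
            pt, encoding = line[len("a=rtpmap:"):].split(" ", 1)
            table[int(pt)] = encoding.strip()
    return table

offer  = rtpmap_table(["a=rtpmap:8 PCMA/8000", "a=rtpmap:101 telephone-event/8000"])
answer = rtpmap_table(["a=rtpmap:8 PCMA/8000", "a=rtpmap:120 telephone-event/8000"])
# 101 in the offer and 120 in the answer name the same encoding:
assert offer[101] == answer[120] == "telephone-event/8000"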
In this case:
The "a=fmtp:" attribute in the answer specifies 101 as payload type number instead of 120. That means it doesn't apply to the telephone-events payload and no information is available as to which DTMF events are supported (RFC 4733). I think this is an implementation error and the fmtp attribute is meant to apply to the telephone-events payload.
It is an indication that you should expect DTMF issues. But it could also all work fine. Give it a try...
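For reference, assuming the fmtp attribute was indeed meant for the telephone-event payload, a consistent answer would presumably have looked like this:

m=audio 1235 RTP/AVP 8 120
a=rtpmap:8 PCMA/8000
a=rtpmap:120 telephone-event/8000
a=fmtp:120 0-15
a=sendrecv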

Packet Size, Window Size and Socket Buffer in TCP

After studying the "window size" concept, my understanding is that it holds packets before they are sent over the wire and until the acknowledgement for the earliest packet arrives; once it fills up, subsequent packets are dropped. I have also read that TCP is a streaming protocol, and that "packet" relates to the IP protocol at the network layer.
What I assumed until now is that I declare a buffer (in code), fill it with some data, and send that buffer using a socket. For instance, I declared a buffer of 10000 bytes and sent it repeatedly over a 10 Gbps link using a socket.
I have the following assumptions and questions. Please verify and help:
1. If I want to send a packet of 64, 256, 512, etc. bytes, I declare a buffer of that size in code and send it over the socket. Does each call to send() transmit one packet of that size?
2. If I want to study the effect of packet size on throughput, what do I have to do? Do I need to vary the buffer size in code?
3. What are the socket buffers we set using SO_SNDBUF and SO_RCVBUF? Google says they are buffer space for the socket. Is that the same as the TCP window size, or something different? Which parameter is more suitable to vary to increase throughput?
4. There are also three parameters for the socket buffers: min, default and max. Which one should I vary in my experiment to get the most relevant results?
If I want to send a packet of 64, 256, 512, etc. bytes, I declare a buffer of that size in code and send it over the socket. Does each call to send() transmit one packet of that size?
Only if you disable the Nagle algorithm and the size is less than the path MTU. You mustn't rely on this.
If I want to study the effect of packet size on throughput, do I need to vary the buffer size in code?
No. Vary SO_RCVBUF at the receiver. This is the single biggest determinant of throughput, as it determines the maximum receive window.
What are the socket buffers we set using SO_SNDBUF and SO_RCVBUF?
The send buffer size at the sender and the receive buffer size at the receiver, both in the kernel.
Is that the same as the TCP window size?
See above.
Or something different? Which parameter is more suitable to vary to increase throughput?
See above.
There are also three parameters for the socket buffers: min, default and max. Which one should I vary in my experiment?
None of them. Those are the system-wide parameters. Just play with SO_SNDBUF and SO_RCVBUF for the specific sockets in your application.
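For concreteness, a minimal Python sketch (the 1 MiB sizes are arbitrary) of tuning the per-socket buffers rather than the system-wide knobs:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Per-socket buffer sizes; the kernel may round or clamp these
# (on Linux the value you set is doubled and capped by wmem_max/rmem_max).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))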
TCP does not directly expose a way to control how packets are sent, since it is a stream protocol. But you can make the TCP stack send packets immediately by disabling the Nagle algorithm: that way, all data you send goes out right away instead of being buffered. Data will still be split into packets of MTU size, which is typically around 1400 bytes, depending on the link.
To answer (2): disable Nagle and invoke send() with buffers of less than 1400 bytes. Use Wireshark to make sure you got what you wanted.
The buffer settings have nothing to do with any of this. I know of no valid reason to touch them.
In general this question is probably moot since you seem to want to send a lot of data. Just leave Nagling enabled and send big buffers (such as 64KB).
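A minimal sketch of the disable-Nagle experiment in Python (the address, port and write size are hypothetical), whose effect you can observe in Wireshark:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# TCP_NODELAY disables the Nagle algorithm, so small writes tend to
# go out as individual segments (though this is still not guaranteed).
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.connect(("127.0.0.1", 11111))
for _ in range(10):
    sock.sendall(b"x" * 512)   # 512-byte writes, below a ~1400-byte MTU
sock.close()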
I did some experiments on Windows 10, using:
- code from https://docs.python.org/3/library/socketserver.html#asynchronous-mixins,
- RawCap for loopback capture,
- Wireshark for watching the result.
The primary client code is:
import socket

def client(ip, port, message):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Set the receive buffer before connect() so it can influence the
    # window advertised during the handshake.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 100000)
    sock.connect((ip, port))
    sock.sendall(bytes(message, 'ascii'))
    response = str(sock.recv(1024), 'ascii')
    print("Received: {}".format(response))
Here is the result (the server port is 11111):
You can see that the TCP receive window size is the same as SO_RCVBUF. This may be platform-independent; you can verify it on other platforms.
The documentation at https://msdn.microsoft.com/en-us/library/windows/hardware/ff570832(v=vs.85).aspx,
The SO_RCVBUF socket option determines the size of a socket's receive buffer that is used by the underlying transport.
confirms this.
Also, when I set SO_SNDBUF = 100000, it had no effect on the TCP transmission between client and server, since the server can simply discard data if the client sends too much at once.
So if you want to tune SO_RCVBUF for maximum throughput, you can refer to http://packetbomb.com/understanding-throughput-and-tcp-windows/; the OS may offer a function to detect the ideal send backlog (ISB).

Why does the skb buffer need to be skipped by 20 bytes to read the transport header when a packet is received?

I am writing a network module in Linux, and I see that the TCP header can be extracted only after skipping 20 bytes from the skb buffer, even though the API is 'skb_transport_header'.
What is the reason behind this? Can somebody please explain in detail? The same is not required for outgoing packets. I understand that while receiving packets, the headers are removed as the packet flows from L1 to L5, whereas when a packet is outgoing, the headers are added. How does this make a difference here?
/* For an incoming packet: early in the receive path the transport
 * header offset has not been set yet, so skip the 20-byte IPv4
 * header (assuming no IP options) to reach the TCP header. */
struct tcphdr *tcp;
tcp = (struct tcphdr *)(skb_transport_header(skb) + 20);

/* For an outgoing packet: the transport header offset is already set. */
struct tcphdr *tcp;
tcp = (struct tcphdr *)(skb_transport_header(skb));
It depends on where in the stack you process the packet. Just after receipt of the packet, the transport header offset won't yet have been set. Once you've gotten to the point where it's been determined that this packet is in fact destined to the local box, that should no longer be necessary. This happens for IPv4 in ip_local_deliver_finish(). (Note that tcp_hdr(), for example, assumes that the transport_header location is already set.)
This makes total sense (even though it can be hard to determine where things like this happen in the normal flow): as each layer is recognized and processed, the starting offset of the next layer is recorded in the sk_buff. The headers aren't actually removed; the skb "data" location is just adjusted to point beyond them, and the layer-specific locations are similarly adjusted.
On output, it's a little more straightforward and is done in the opposite order: the transport header is created first, then the network header is prepended to that, and so on.

How can I defend against attackers who send junk data packets?

I wrote a TCP socket program and defined a text protocol format like "length|content".
To keep it simple, the "length" is always one byte long and defines the number of bytes of "content".
My problem is: when an attacker sends a packet like "1|a51", the extra bytes stay in TCP's receive buffer, the program parses them wrong, so the next packet starts like "5|1XXXX", and everything remaining in the buffer is then parsed wrong as well.
How do I solve this problem?
If you get garbage, just close the connection. It's not your problem to figure out what they meant, if anything.
Instead of length|content only, you also need to provide a checksum; if the checksum is not correct, you should drop the connection to avoid a partial receive.
This is a typical problem with TCP, since TCP is stream-based. But consider HTTP, an application protocol on top of TCP: its request/response structure makes sure each end of the connection knows when the data has been fully transferred.
Your scenario is less dire than it sounds, though, since an attacker can only affect his own connection. He cannot change the data on other connections unless he controls a router or switch between your application and the users.
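To make the first answer concrete, here is a minimal Python sketch (recv_exact and read_frame are hypothetical helpers) of parsing the length|content protocol defensively, assuming the length is a single ASCII digit as in the "1|a51" example, and closing the connection on anything malformed:

def recv_exact(sock, n):
    # Read exactly n bytes, or return None if the peer closes early.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            return None
        buf += chunk
    return buf

def read_frame(sock):
    header = recv_exact(sock, 2)   # one length byte plus the '|' separator
    if header is None or not header[:1].isdigit() or header[1:2] != b"|":
        sock.close()               # garbage: just drop the connection
        return None
    content = recv_exact(sock, int(header[:1]))
    if content is None:
        sock.close()
        return None
    return content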