I cannot send short messages over TCP - sockets

I'm having trouble tuning TCP client-server communication.
My current project has a client running on a PC (C#) and a server
running on embedded Linux 4.1.22-ltsi.
They currently use UDP to exchange data.
The client and server work in blocking mode and
send short messages to each other
(16, 60, 200 bytes, etc.) that contain either a command or a set of parameters.
The messages do not include any header with the message length, because
UDP is a message-oriented protocol: its recvfrom() API returns the number of bytes in the received message.
My server's program structure depends on receiving and processing each message as a single, complete unit.
The problem arose when I tried to switch the communication from UDP to TCP.
The server's receive buffer (for the recv() TCP API) is 2048 bytes:
#define UDP_RX_BUF_SIZE 2048
numbytes = recv(fd_connect, rx_buffer, UDP_RX_BUF_SIZE, MSG_WAITALL/*BLOCKING_MODE*/);
So recv() only returns from waiting when rx_buffer is full, i.e. after it has received
2048 bytes. This breaks the whole program design: when the client sends a 16-byte command
to the server and waits for an answer, the server's recv() keeps the message
"in its stomach" until it has received 2048 bytes.
I tried to fix it as follows, without success:
On the client side (C#) I set the socket parameter theSocket.NoDelay.
When I checked this with a sniffer I saw that the client sends the messages "as I want",
with the requested length.
On the server side I set the TCP_NODELAY socket option to 1:
int optval = 1;
setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &optval, sizeof(optval));
On the server side (Linux) I checked the socket options SO_SNDLOWAT/SO_RCVLOWAT; they are 1 byte each.
Please see the attached sniffer log picture. 10.0.0.10 is the client and 10.0.0.106 is the server. You can see that the client sets the PSH (push) flag, telling the receiving side to pass the incoming data to the application immediately rather than filling a buffer.
An additional question: what are the SSH-encrypted packets that run between the two sides? I suppose they come from my Eclipse debugger on the PC (which runs the server application over the same Ethernet connection). Am I right?
So, my problem is how to make the recv() API return each short message (16, 60, 200 bytes, etc.) instead of accumulating data until the receive buffer fills.

TCP is connection oriented and it also preserves the order in which data is sent and received.
Having said that, with TCP you receive a stream of bytes, not individual messages as in UDP. So you will need to send the packet length (and a marker) as the initial bytes, as sketched below.
The receiver can then first read the packet length, read data until that length is reached, and then expect the next packet length.
You can also look at libraries like Netty or ZeroMQ that do this extra work for you.
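As a minimal sketch of the length-prefix idea on the receiving side (plain C, assuming a connected stream socket fd; the 4-byte big-endian header and the recv_n/recv_message helper names are illustrative, not part of any library):

#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>   /* ntohl */

/* Read exactly "len" bytes; recv() on a stream socket may return less, so loop. */
static ssize_t recv_n(int fd, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
        if (n <= 0)                  /* 0 = peer closed, -1 = error */
            return n;
        got += (size_t)n;
    }
    return (ssize_t)got;
}

/* Receive one length-prefixed message: 4-byte big-endian length, then the payload. */
static ssize_t recv_message(int fd, char *buf, size_t bufsize)
{
    uint32_t netlen;
    if (recv_n(fd, &netlen, sizeof netlen) <= 0)
        return -1;
    uint32_t len = ntohl(netlen);
    if (len > bufsize)               /* message does not fit the caller's buffer */
        return -1;
    return recv_n(fd, buf, len);
}

The sender builds each message the same way, htonl(length) followed by the payload, so the receiver always knows where one message ends and the next begins.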


Using "send" to tcp socket/Windows/c

For the C send function (blocking mode), is it specified that the function returns the number of sent bytes only once they have been received at the destination? I'm not sure I understand all the nuances, even after writing a demo app with WSAIoctl and WSARecv on the server side.
When does send return with fewer bytes than the buffer-length parameter asked for?
What is considered "received at the destination"? My first guess is when the data sits in the server OS's buffer and the server application is notified. My second is when the server application's recv call has read it fully.
Unless you are using a (somewhat exotic) library, a send on a socket returns the number of bytes passed to the TCP send buffer successfully, not the number of bytes received by the peer (see Microsoft's docs for example).
When you are streaming data via a socket, you need to check how many bytes were actually accepted into the TCP send buffer. That's why a send command is usually placed inside a loop that issues several sends if needed.
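A sketch of such a loop in plain C (POSIX-style send; on Windows the idea is the same, only the socket type and error reporting differ):

#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling send() until the whole buffer has been handed to the TCP send buffer. */
static int send_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n < 0)
            return -1;   /* local error: invalid socket, timeout, connection reset, ... */
        sent += (size_t)n;
    }
    return 0;            /* everything is queued locally, not necessarily received by the peer */
}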
Errors in send are local: for example, the socket was closed by the peer during the sending operation (making your socket invalid), or the operation timed out (the TCP buffer is not emptying, i.e. the peer is not receiving data fast enough, or there is some other trouble).
After send has completed, you have no easy way of knowing whether the peer received all the bytes you sent. You'll usually just issue closesocket and make sure that your socket has a proper linger option set (i.e. only close after a timeout or after successfully finishing the send). Alternatively, you wait for a confirmation from the peer (for example via a recv that returns zero bytes, indicating that the connection was gracefully closed).
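For the linger option mentioned above, a short sketch in C (the 10-second timeout is an arbitrary example, fd is assumed to be your connected socket, and set_linger is just an illustrative helper name):

#include <sys/socket.h>

/* Make close() wait (up to 10 seconds here) for unsent data to drain before the socket is torn down. */
static int set_linger(int fd)
{
    struct linger lng;
    lng.l_onoff  = 1;    /* enable lingering on close */
    lng.l_linger = 10;   /* seconds to wait for pending data */
    return setsockopt(fd, SOL_SOCKET, SO_LINGER, &lng, sizeof lng);
}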

TCP/IP using Ada Sockets: How to correctly finish a packet? [duplicate]

This question already has answers here:
TCP Connection Seems to Receive Incomplete Data
(5 answers)
Closed 3 years ago.
I'm attempting to implement the Remote Frame Buffer protocol using Ada's Sockets library and I'm having trouble controlling the length of the packets that I'm sending.
I'm following the RFC 6143 specification (https://tools.ietf.org/pdf/rfc6143.pdf), see comments in the code for section numbers...
-- Section 7.1.1
String'Write (Comms, Protocol_Version);
Put_Line ("Server version: '"
          & Protocol_Version (1 .. 11) & "'");
String'Read (Comms, Client_Version);
Put_Line ("Client version: '"
          & Client_Version (1 .. 11) & "'");
-- Section 7.1.2
-- Server sends security types
U8'Write (Comms, Number_Of_Security_Types);
U8'Write (Comms, Security_Type_None);
-- Client replies by selecting a security type
U8'Read (Comms, Client_Requested_Security_Type);
Put_Line ("Client requested security type: "
          & Client_Requested_Security_Type'Image);
-- Section 7.1.3
U32'Write (Comms, Byte_Reverse (Security_Result));
-- Section 7.3.1
U8'Read (Comms, Client_Requested_Shared_Flag);
Put_Line ("Client requested shared flag: "
          & Client_Requested_Shared_Flag'Image);
Server_Init'Write (Comms, Server_Init_Rec);
The problem seems to be (according to wireshark) that my calls to the various 'Write procedures are causing bytes to queue up on the socket without getting sent.
Consequently, two or more packets' worth of data are being sent as one, causing malformed packets. Sections 7.1.2 and 7.1.3 are being sent consecutively in one packet instead of being broken into two.
I had wrongly assumed that 'Reading from the socket would cause the outgoing data to be flushed out, but that does not appear to be the case.
How do I tell Ada's Sockets library "this packet is finished, send it right now"?
To emphasize user207421's comment (https://stackoverflow.com/users/207421/user207421):
I'm not a protocols guru, but from my own experience, the usage of TCP (see RFC 793) is often misunderstood.
The problem seems to be (according to wireshark) that my calls to the various 'Write procedures are causing bytes to queue up on the socket without getting sent.
Consequently, two or more packets' worth of data are being sent as one, causing malformed packets. Sections 7.1.2 and 7.1.3 are being sent consecutively in one packet instead of being broken into two.
In short, TCP is not message-oriented.
With TCP, sending/writing to the socket only appends data to the TCP stream. The stack is free to send it in one exchange or several, and if you have lengthy data to send and a message-oriented protocol to implement on top of TCP, you may need to handle message reconstruction yourself. Usually, a special end-of-message sequence of characters is added at the end of each message.
Processes transmit data by calling on the TCP and passing buffers of data as arguments. The TCP packages the data from these buffers into segments and calls on the internet module to transmit each segment to the destination TCP. The receiving TCP places the data from a segment into the receiving user's buffer and notifies the receiving user. The TCPs include control information in the segments which they use to ensure reliable ordered data transmission.
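As a sketch of that end-of-message delimiter approach (plain C, using a newline as the terminator; the terminator choice, the recv_line name, and the byte-at-a-time read are purely illustrative):

#include <sys/types.h>
#include <sys/socket.h>

/* Read bytes until the '\n' terminator. Returns the message length without the
   terminator, or -1 on error, peer close, or a message too long for the buffer. */
static ssize_t recv_line(int fd, char *buf, size_t bufsize)
{
    size_t used = 0;
    while (used < bufsize) {
        char c;
        ssize_t n = recv(fd, &c, 1, 0);
        if (n <= 0)
            return -1;
        if (c == '\n')
            return (ssize_t)used;
        buf[used++] = c;
    }
    return -1;
}

A real implementation would read larger chunks and scan for the terminator; one recv() per byte is only kept here for brevity.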
See also https://stackoverflow.com/a/11237634/7237062, quoting:
TCP is a stream-oriented connection, not message-oriented. It has no
concept of a message. When you write out your serialized string, it
only sees a meaningless sequence of bytes. TCP is free to break
that stream up into multiple fragments and they will be received at
the client in those fragment-sized chunks. It is up to you to
reconstruct the entire message on the other end.
In your scenario, one would typically send a message length prefix.
This way, the client first reads the length prefix so it can then know
how large the incoming message is supposed to be.
or TCP Connection Seems to Receive Incomplete Data, quoting:
The recv function can receive as little as 1 byte, you may have to call it multiple times to get your entire payload. Because of this, you need to know how much data you're expecting. Although you can signal completion by closing the connection, that's not really a good idea.
Update:
I should also mention that the send function has the same conventions as recv: you have to call it in a loop because you cannot assume that it will send all your data. While it might always work in your development environment, that's the kind of assumption that will bite you later.

Packet Size, Window Size and Socket Buffer in TCP

After studying the "window size" concept, what I understood is that it holds packets before they are sent over the wire and until an acknowledgement arrives for the earliest packet. Once it fills up, subsequent packets will be dropped. Somewhere I have also read that TCP is a streaming protocol, and that "packet" belongs to the IP protocol at the network layer.
What I had assumed until now is that I declare a buffer (inside the code), fill it with some data, and send that buffer using the socket. I declared a buffer of 10000 bytes and sent it repeatedly over a 10 Gbps link.
I have the following assumptions and questions. Please verify and help.
If I want to send a packet of 64, 256, 512, etc. bytes, I declare a buffer of that size in the code and send it over the socket. Each execution of the send() call will send one packet of that size.
So if I want to study the effect of packet size on throughput, what do I have to do? Do I need to vary the buffer size in the code?
What are the socket buffers that we set using SO_SNDBUF and SO_RCVBUF? Google says it's buffer space for the socket. Is it the same as the TCP window size, or something different? Which parameter is more suitable to vary in order to increase throughput?
Also, there are three parameters for the socket buffer: min, default, and max. Which one should I vary in my experiment to get the most relevant results?
If I want to send a packet of 64, 256, 512, etc. bytes, I declare a buffer of that size in the code and send it over the socket. Each execution of the send() call will send one packet of that size.
Only if you disable the Nagle algorithm and the size is less than the path MTU. You mustn't rely on this.
So if I want to study the effect of packet size on throughput, what do I have to do? Do I need to vary the buffer size in the code?
No. Vary SO_RCVBUF at the receiver. This is the single biggest determinant of throughput, as it determines the maximum receive window.
What are the socket buffers that we set using SO_SNDBUF and SO_RCVBUF?
Send buffer size at the sender, and receive buffer size at the receiver. In the kernel.
Is it the same as the TCP window size?
See above.
Or something different? Which parameter is more suitable to vary to increase throughput?
See above.
Also, there are three parameters for the socket buffer: min, default, and max. Which one should I vary for my experiment to get the most relevant results?
None of them. These are the system-wide parameters. Just play with SO_SNDBUF and SO_RCVBUF for the specific sockets in your application.
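A sketch of setting the per-socket buffers in C (the 1 MiB sizes and the tune_buffers helper name are only illustrative, and sock is assumed to be a freshly created socket; on Linux the receive buffer should be set before connect()/listen() so it can influence the window scale negotiated during the handshake):

#include <stdio.h>
#include <sys/socket.h>

/* Apply example buffer sizes to a socket that has not been connected yet. */
static void tune_buffers(int sock)
{
    int rcvbuf = 1 << 20;   /* 1 MiB receive buffer, example value */
    int sndbuf = 1 << 20;   /* 1 MiB send buffer, example value */

    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof rcvbuf);
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof sndbuf);

    /* Read back the effective value (Linux reports roughly double the requested size). */
    socklen_t optlen = sizeof rcvbuf;
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &optlen);
    printf("effective SO_RCVBUF: %d bytes\n", rcvbuf);
}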
TCP does not directly expose a way to control how packets are sent, since it is a stream protocol. But you can make the TCP stack send packets sooner by disabling the Nagle algorithm. That way all data you send will be sent out immediately instead of being buffered. The data will still be split into packets of about MTU size, roughly ~1400 bytes, depending on the link.
To answer (2): Disable nagling and invoke send with buffers of < 1400 bytes. Use Wireshark to make sure you got what you wanted.
The buffer settings have nothing to do with any of this. I know of no valid reason to touch them.
In general this question is probably moot since you seem to want to send a lot of data. Just leave Nagling enabled and send big buffers (such as 64KB).
I did an experiment on Windows 10:
code from https://docs.python.org/3/library/socketserver.html#asynchronous-mixins,
RawCap for loopback capture,
Wireshark for viewing the result.
The primary client code is:
import socket

def client(ip, port, message):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 100000)
    sock.connect((ip, port))
    sock.sendall(bytes(message, 'ascii'))
    response = str(sock.recv(1024), 'ascii')
    print("Received: {}".format(response))
Here is the result (the server port is 11111):
You can see that the TCP receive window size is the same as SO_RCVBUF. This may be platform specific, so verify it on other platforms.
This is confirmed at https://msdn.microsoft.com/en-us/library/windows/hardware/ff570832(v=vs.85).aspx:
The SO_RCVBUF socket option determines the size of a socket's receive buffer that is used by the underlying transport.
Also, when I set SO_SNDBUF = 100000, it had no effect on the TCP transmission between client and server, since the server can simply discard data if the client sends a lot at once.
So, if you want to tune SO_RCVBUF for maximum throughput, you can refer to http://packetbomb.com/understanding-throughput-and-tcp-windows/; the OS may also offer a function to detect the ideal send backlog (ISB).

recvfrom() only gets up to 2048 bytes from UDP socket

I have to call the function repeatedly to get all the data, given that the len argument is set to 10240. But this eventually blocks. How can I get all the data and return safely, in a platform-independent way?
By the way, I use netcat on the sender side:
cat ocr_pi.png | nc -u server 5555
Is this issue related to nc's behavior? I didn't find any parameter to set the UDP packet size (-O is for TCP).
Thanks.
UDP sends and receives data as messages. In the len argument, you tell recvfrom() the max message size you can receive, and then recvfrom() waits until a full message arrives, regardless of its size. UDP messages are self-contained. Unlike TCP, a UDP message cannot be partially sent/received. It is an all-or-nothing thing. If the size of the received message is greater than the len value you specify, the message is discarded and you get an error.
The only way recvfrom() blocks is if there is no message available to read. If you don't want to block, use select() (or pselect() or epoll or other platform equivalent) to specify a timeout to wait for a message to arrive, and then call recvfrom() only if there is actually something to read.
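A minimal sketch of that select()-then-recvfrom() pattern in C (the 2-second timeout and the recv_with_timeout helper name are arbitrary choices, and sockfd is assumed to be your bound UDP socket):

#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Wait up to 2 seconds for a datagram; return its size, 0 on timeout, -1 on error. */
static ssize_t recv_with_timeout(int sockfd, char *buf, size_t bufsize)
{
    fd_set readfds;
    struct timeval tv = { 2, 0 };   /* 2-second timeout, example value */

    FD_ZERO(&readfds);
    FD_SET(sockfd, &readfds);

    int ready = select(sockfd + 1, &readfds, NULL, NULL, &tv);
    if (ready > 0)
        return recvfrom(sockfd, buf, bufsize, 0, NULL, NULL);  /* will not block now */
    return ready;                    /* 0 = timeout (sender is done or datagram lost), -1 = error */
}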

Reproduce write-write-read delay with Java sockets

I have read that the combination of three things causes something like a 200 ms delay with TCP: Nagle's algorithm, delayed acknowledgement, and the "write-write-read" combination. However, I cannot reproduce this delay with Java sockets, and I am therefore not sure whether I have understood correctly.
I am running a test on Windows 7 with Java 7, with two threads using sockets over the loopback address. I have not touched the tcpNoDelay option on either socket (false by default) nor played around with any TCP settings in the OS. The main piece of the client code is shown below. The server responds with one byte after every two bytes it receives from the client.
for (int i = 0; i < 100; i++) {
    client.getOutputStream().write(1);
    client.getOutputStream().write(2);
    System.out.println(client.getInputStream().read());
}
I do not see any delay. Why not?
I believe you are seeing delayed acknowledgment at work.
You write one byte and then another byte to the socket. The server's TCP stack receives the data and wakes up the server application thread. This thread writes a byte back to the stream, and that byte is sent to the client inside the ACK segment; i.e., the TCP stack gives the application a chance to send its reply immediately. So you see no delay.
You can capture a dump of the traffic, and also run the experiment between two computers, to see what really happens.