As I understand it, the C send() function (in blocking mode) is specified to return the number of sent bytes once they have been received at the destination. I'm not sure I understand all the nuances, even after writing a "demo" app with WSAIoctl and WSARecv on the server side.
When does send() return fewer bytes than requested in the buffer-length parameter?
And what counts as "received at the destination"? My first guess is that it happens when the data sits in the server OS's buffer and the server application is notified. My second guess is that it happens when the server application's recv() call has read it fully.
Unless you are using a (somewhat exotic) library, send() on a socket returns the number of bytes successfully passed to the TCP send buffer, not the number of bytes received by the peer (see Microsoft's docs, for example).
When you are streaming data via a socket, you need to check how many bytes were actually accepted into the TCP send buffer. That's why a send call usually sits inside a loop that issues several sends if needed.
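A minimal sketch of such a loop, assuming a plain blocking BSD-style socket (the helper name send_all is mine, not something from the question):

#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling send() until the whole buffer has been handed to the TCP
   send buffer, or a local error occurs. Returns 0 on success, -1 on error. */
static int send_all(int sock, const char *buf, size_t len)
{
    size_t total = 0;
    while (total < len) {
        ssize_t n = send(sock, buf + total, len - total, 0);
        if (n < 0)
            return -1;          /* local failure: reset, timeout, ... */
        total += (size_t)n;     /* send() may accept fewer bytes than asked */
    }
    return 0;
}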
Errors in send() are local: for example, the socket was closed by the peer during a sending operation (making your socket invalid), or the operation timed out (the TCP send buffer is not emptying, i.e. the peer is not receiving data fast enough, or some other trouble).
After all sends have completed, you have no easy way of knowing whether the peer received every byte you sent. You'll usually just issue closesocket() and make sure your socket has a proper linger option set (i.e. only close after a timeout or after successfully finishing the send). Alternatively, you wait for a confirmation from the peer (for example via a recv() that returns zero bytes, indicating that the connection was gracefully closed).
I'm writing a simple HTTP server.
I want to shut down the socket after the server has sent all the data.
I considered comparing the return value of write() on the socket with the actual content length, but I've read that the return value only means the data was moved to the socket's send buffer. (I'm not sure about this and I don't know how to check it.)
If so, can I shut down the socket as soon as the byte counts match? What if the data that was sent needs to be retransmitted at the TCP level after the server has sent the FIN flag?
The OS does not discard data you have written when you call shutdown(SHUT_WR). If the other end already shut down its end (you can tell because you received 0 bytes) then you should be able to close the socket, and the OS will keep it open until it has finished sending everything.
The FIN is treated like part of the data. It has to be retransmitted if the other end doesn't receive it, and it doesn't get processed until everything before it has been received. This is called "graceful shutdown" or "graceful close". This is unlike RST, which signals that the connection should be aborted immediately.
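As a minimal sketch of such a graceful close on the sending side (plain BSD sockets assumed; the helper name is illustrative and error handling is trimmed):

#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

/* Stop sending, then wait for the peer to finish and close its side
   before releasing the socket. */
static void graceful_close(int sock)
{
    char buf[512];
    ssize_t n;

    shutdown(sock, SHUT_WR);               /* the FIN is queued after any buffered data */

    /* Drain until recv() returns 0, i.e. the peer has closed its side. */
    while ((n = recv(sock, buf, sizeof buf, 0)) > 0)
        ;                                  /* discard (or process) any remaining data */

    close(sock);
}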
I'm having trouble tuning TCP client-server communication.
My current project has a client running on a PC (C#) and a server running on embedded Linux 4.1.22-ltsi.
They use UDP to exchange data.
The client and server work in blocking mode and send short messages to each other (16, 60, 200 bytes, etc.) that contain either a command or a set of parameters.
The messages do not include any header with the message length, because UDP is a message-oriented protocol and its recvfrom() API returns the number of received bytes.
For my server's program structure it is important to receive and process each message as a single unit.
The problem arose when I tried to implement TCP communication instead of UDP.
The server's receive buffer (for the recv() TCP API) is 2048 bytes:
#define UDP_RX_BUF_SIZE 2048
numbytes = recv(fd_connect, rx_buffer, UDP_RX_BUF_SIZE, MSG_WAITALL/*BLOCKING_MODE*/);
So recv() only returns from waiting when rx_buffer is full, i.e. after it has received 2048 bytes. This breaks the whole program approach. In other words, when the client sends a 16-byte command to the server and waits for an answer, the server's recv() keeps the message "in its stomach" until it has received 2048 bytes.
I tried to fix it as described below, without success:
On the client side (C#) I set the socket parameter theSocket.NoDelay.
When I checked this in the sniffer I saw that the client sends messages "as I want", with the requested length.
On the server side I set the TCP_NODELAY socket option to 1:
int optval = 1;
setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &optval, sizeof(optval));
On the server side (Linux) I checked the SO_SNDLOWAT/SO_RCVLOWAT socket options and they are 1 byte each.
Please see the attached sniffer log picture. 10.0.0.10 is the client; 10.0.0.106 is the server. You can see that the client sets the PSH (push) flag, telling the server side to move the incoming data to the application immediately rather than filling a buffer.
Additional question: what are the SSH-encrypted packets that run between the two sides? I suppose they are sent by my Eclipse debugger on the PC (which runs the server application over the same Ethernet connection). Am I right?
So my problem is how to make the recv() API return each short message (16, 60, 200 bytes, etc.) instead of accumulating them until the receive buffer fills.
TCP is connection-oriented and delivers data in the order it was sent.
That said, a TCP client receives a stream of bytes, not individual messages as in UDP. So you will need to send the packet length (and possibly a marker) as the initial bytes.
The receiver can then read the packet length first, read data until that length is reached, and then expect the next packet length.
You can also look at libraries like Netty or ZeroMQ that do this extra work for you.
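A minimal sketch of such length-prefixed framing on the receiving side (plain BSD sockets assumed; the 4-byte big-endian prefix and the helper names recv_exact/recv_message are illustrative choices, not part of the original code):

#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>   /* ntohl */
#include <stdint.h>

/* Keep calling recv() until exactly len bytes have arrived.
   Returns 0 on success, -1 on error or if the peer closed the connection. */
static int recv_exact(int sock, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sock, (char *)buf + got, len - got, 0);
        if (n <= 0)
            return -1;
        got += (size_t)n;
    }
    return 0;
}

/* Read one framed message: a 4-byte big-endian length prefix, then the payload. */
static int recv_message(int sock, char *payload, size_t max_len, size_t *out_len)
{
    uint32_t net_len;
    if (recv_exact(sock, &net_len, sizeof net_len) < 0)
        return -1;
    uint32_t len = ntohl(net_len);
    if (len > max_len)
        return -1;               /* message too large for the caller's buffer */
    if (recv_exact(sock, payload, len) < 0)
        return -1;
    *out_len = len;
    return 0;
}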
I'm attempting to implement the Remote Frame Buffer protocol using Ada's Sockets library and I'm having trouble controlling the length of the packets that I'm sending.
I'm following the RFC 6143 specification (https://tools.ietf.org/pdf/rfc6143.pdf), see comments in the code for section numbers...
-- Section 7.1.1
String'Write (Comms, Protocol_Version);
Put_Line ("Server version: '"
& Protocol_Version (1 .. 11) & "'");
String'Read (Comms, Client_Version);
Put_Line ("Client version: '"
& Client_Version (1 .. 11) & "'");
-- Section 7.1.2
-- Server sends security types
U8'Write (Comms, Number_Of_Security_Types);
U8'Write (Comms, Security_Type_None);
-- client replies by selecting a security type
U8'Read (Comms, Client_Requested_Security_Type);
Put_Line ("Client requested security type: "
& Client_Requested_Security_Type'Image);
-- Section 7.1.3
U32'Write (Comms, Byte_Reverse (Security_Result));
-- Section 7.3.1
U8'Read (Comms, Client_Requested_Shared_Flag);
Put_Line ("Client requested shared flag: "
& Client_Requested_Shared_Flag'Image);
Server_Init'Write (Comms, Server_Init_Rec);
The problem seems to be (according to Wireshark) that my calls to the various 'Write procedures are causing bytes to queue up on the socket without getting sent.
Consequently two or more packets' worth of data are being sent as one, causing malformed packets. Sections 7.1.2 and 7.1.3 are being sent consecutively in one packet instead of being broken into two.
I had wrongly assumed that 'Reading from the socket would cause the outgoing data to be flushed out, but that does not appear to be the case.
How do I tell Ada's Sockets library "this packet is finished, send it right now"?
To emphasize https://stackoverflow.com/users/207421/user207421's comment:
I'm not a protocol guru, but from my own experience the usage of TCP (see RFC 793) is often misunderstood.
The problem seems to be (according to Wireshark) that my calls to the various 'Write procedures are causing bytes to queue up on the socket without getting sent.
Consequently two or more packets' worth of data are being sent as one, causing malformed packets. Sections 7.1.2 and 7.1.3 are being sent consecutively in one packet instead of being broken into two.
In short, TCP is not message-oriented.
Using TCP, sending/writing to the socket only appends data to the TCP stream. The stack is free to transmit it in one exchange or several, and if you have lengthy data to send and a message-oriented protocol to implement on top of TCP, you may need to handle message reconstruction yourself. Usually, a special end-of-message sequence of characters is added at the end of each message (see the sketch at the end of this answer). As RFC 793 puts it:
Processes transmit data by calling on the TCP and passing buffers of data as arguments. The TCP packages the data from these buffers into segments and calls on the internet module to transmit each segment to the destination TCP. The receiving TCP places the data from a segment into the receiving user's buffer and notifies the receiving user. The TCPs include control information in the segments which they use to ensure reliable ordered data transmission.
See also https://stackoverflow.com/a/11237634/7237062, quoting:
TCP is a stream-oriented connection, not message-oriented. It has no
concept of a message. When you write out your serialized string, it
only sees a meaningless sequence of bytes. TCP is free to break that
stream up into multiple fragments and they will be received at
the client in those fragment-sized chunks. It is up to you to
reconstruct the entire message on the other end.
In your scenario, one would typically send a message length prefix.
This way, the client first reads the length prefix so it can then know
how large the incoming message is supposed to be.
or TCP Connection Seems to Receive Incomplete Data, quoting:
The recv function can receive as little as 1 byte, you may have to call it multiple times to get your entire payload. Because of this, you need to know how much data you're expecting. Although you can signal completion by closing the connection, that's not really a good idea.
Update:
I should also mention that the send function has the same conventions as recv: you have to call it in a loop because you cannot assume that it will send all your data. While it might always work in your development environment, that's the kind of assumption that will bite you later.
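To illustrate the end-of-message-marker approach mentioned earlier, here is a minimal sketch (plain BSD sockets; the newline delimiter and the helper name are arbitrary example choices):

#include <sys/types.h>
#include <sys/socket.h>

/* Accumulate bytes until the end-of-message marker (here '\n') is seen,
   then return one complete message to the caller.
   Returns the message length, or -1 on error, close, or overflow. */
static ssize_t recv_until_delim(int sock, char *msg, size_t max_len)
{
    size_t used = 0;
    while (used < max_len) {
        char c;
        ssize_t n = recv(sock, &c, 1, 0);   /* simple but slow: one byte per call */
        if (n <= 0)
            return -1;                      /* error or connection closed */
        if (c == '\n')
            return (ssize_t)used;           /* complete message, delimiter stripped */
        msg[used++] = c;
    }
    return -1;                              /* message longer than the buffer */
}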
Is it possible to implement the equivalent of Socket.Poll in async/await paradigm (or BeginXXX/EndXXX async pattern)?
A method which would act like NetworkStream.ReadAsync or Socket.BeginReceive but:
leave the data in the socket buffer
complete after the specified interval of time if no data arrived (leaving the socket in connected state so that the polling operation can be retried)
I need to implement IMAP IDLE so that the client connects to the mail server and then goes into a waiting state in which it receives data from the server. If the server does not send anything within 10 minutes, the code sends a ping to the server (without reconnecting; the connection is never closed) and starts waiting for data again.
In my tests, leaving the data in the buffer seems to be possible if I tell Socket.BeginReceive method to read no more than 0 bytes, e.g.:
sock.BeginReceive(b, 0, 0, SocketFlags.None, null, null)
However, I'm not sure it will work in all cases; maybe I'm missing something. For instance, if the remote server closes the connection, it may send a zero-byte packet, and I'm not sure whether Socket.BeginReceive will act identically to Socket.Poll in that case.
And the main problem is how to stop socket.BeginReceive without closing the socket.
I'm making an application based on lwIP; the application just sends data to the server.
When my app has been running for some time (about 5 hours), I found that the send thread hangs in the send() function, and after about 30 minutes send() returns 0 and my thread runs again.
On the server side there is a keepalive with a 5-minute period. When my app hangs, the server closes the socket 5 minutes later, but my app does not notice this and keeps hanging in send() until it gets the 0 return 30 minutes later. Why does this happen?
1: If the upload speed is not high enough to send the data, will it hang in send()?
2: Maybe the server has not read the data in time, so the send buffer fills up and send() hangs?
How can I avoid these problems in my code? I have tried setting TCP_NODELAY and SO_SNDTIMEO and using select before send, but the problem persists.
send() blocks when the receiver is too far behind the sender. recv() returns zero when the peer has closed the connection, which means you must close the socket and stop reading.
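To bound how long a blocking send() can stall, one option is to wait for the socket to become writable with a timeout before sending. A minimal sketch, assuming the BSD-style sockets API that lwIP's socket layer provides (the helper name and timeout handling are illustrative, and this only helps for short messages like the ones described above):

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <sys/time.h>

/* Wait up to timeout_s seconds for room in the TCP send buffer before
   calling send(), which reduces the chance that the sending thread hangs
   indefinitely behind a slow or dead receiver.
   Returns the number of bytes sent, 0 on timeout, or -1 on error. */
static int send_with_timeout(int sock, const void *buf, size_t len, int timeout_s)
{
    fd_set wfds;
    struct timeval tv;

    FD_ZERO(&wfds);
    FD_SET(sock, &wfds);
    tv.tv_sec = timeout_s;
    tv.tv_usec = 0;

    int r = select(sock + 1, NULL, &wfds, NULL, &tv);
    if (r < 0)
        return -1;                 /* select() failed */
    if (r == 0)
        return 0;                  /* still no room in the send buffer */

    ssize_t n = send(sock, buf, len, 0);
    return (n < 0) ? -1 : (int)n;
}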