How does a socket application "know" it has received a complete data transmission message?

From Python documentation on socket programming. In what I have written below, have I correctly understood how a receiving application "knows" it has received a complete message from a sender?
The TCP layer just sends a stream of bytes from the sender to the receiver.
The sender has the responsibility of indicating how long the sent message is. There are three main methods:
• Every message sent is of a fixed length.
• Every message is of variable length and has a header that contains the length of the message sent.
• Every message is of variable length and has a "marker" that indicates the end of the message.
The receiver "looks" in the memory area, the buffer, where TCP is putting the stream of received bytes and the receiver only processes the message when it determines the end of the message has been reached.
This means the receiving application has to loop, continually looking at the buffer until it determines the full message has been received.
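
For example, a minimal sketch of the third method (an end-of-message marker), assuming a newline byte terminates each message; the helper name and buffer handling are illustrative, not from the Python documentation:

def recv_until_marker(sock, marker=b"\n", bufsize=4096):
    """Accumulate stream data until the marker appears, then return
    one complete message (without the marker)."""
    received = bytearray()
    while marker not in received:
        chunk = sock.recv(bufsize)
        if not chunk:  # peer closed the connection before the marker arrived
            raise ConnectionError("connection closed mid-message")
        received.extend(chunk)
    message, _, leftover = bytes(received).partition(marker)
    # NOTE: 'leftover' may already hold the start of the next message;
    # a real receiver would keep it in a per-connection buffer.
    return message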

Related

I cannot send short messages over the TCP protocol

I am having trouble tuning TCP client-server communication.
My current project has a client running on a PC (C#) and a server
running on embedded Linux 4.1.22-ltsi.
They use UDP to exchange data.
The client and server work in blocking mode and
send short messages to each other
(16, 60, 200 bytes, etc.) that include either a command or a set of parameters.
The messages do not include any header with the message length because
UDP is a message-oriented protocol: its recvfrom() API returns the number of received bytes.
For my server's program structure it is important to receive and process each message individually.
The problem arises when I try to implement TCP communication instead of UDP.
The server's receive buffer (for the recv() TCP API) is 2048 bytes:
#define UDP_RX_BUF_SIZE 2048
numbytes = recv(fd_connect, rx_buffer, UDP_RX_BUF_SIZE, MSG_WAITALL/*BLOCKING_MODE*/);
So recv() only returns from waiting when rx_buffer is full, i.e. after it receives
2048 bytes. This breaks the whole program's approach. In other words, when the client sends a 16-byte command
to the server and waits for an answer, the server's recv() keeps the message
"in its stomach" until it has received 2048 bytes.
I tried to fix it as below, without success:
On the client side (C#) I set the socket parameter theSocket.NoDelay.
When I checked this with the sniffer I saw that the client sends messages "as I want",
with the requested length.
On the server side I set the TCP_NODELAY socket option to 1:
int optval = 1;
setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &optval, sizeof(optval));
On the server side (Linux) I checked the SO_SNDLOWAT/SO_RCVLOWAT socket options, and they are 1 byte each.
Please see the attached sniffer log picture. 10.0.0.10 is the client; 10.0.0.106 is the server. It can be seen that the client sets the PSH (push) flag, telling the server side to move the incoming data to the application immediately and not to fill a buffer.
An additional question: what are the SSH-encrypted packets that run between the two sides? I suppose my Eclipse debugger on the PC (running the server application over the same Ethernet connection) sends them. Am I right?
So, my problem is how to make the recv() API return each short message (16, 60, 200 bytes, etc.) instead of accumulating them until the receive buffer fills.
TCP is connection-oriented, and it also maintains the order in which bytes are sent and received.
That said, with TCP you receive a stream of bytes, not individual messages as in UDP. (In your code, the MSG_WAITALL flag is also why recv() waits until the full 2048-byte buffer is filled rather than returning whatever has arrived.) So you will need to send the packet length, or a marker, as the initial bytes of each message.
The receiver can then first read the packet length, read data until that length is reached, and then expect the next packet length.
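
A minimal sketch of that length-prefix scheme in Python (the C logic is the same); the recv_exact() helper and the 4-byte big-endian header are illustrative assumptions, not part of the question's protocol:

import struct

def recv_exact(sock, n):
    # Call recv() in a loop until exactly n bytes have been read.
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:  # connection closed before the full message arrived
            raise ConnectionError("connection closed mid-message")
        buf.extend(chunk)
    return bytes(buf)

def recv_message(sock):
    # Read the fixed 4-byte length header, then the payload it describes.
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)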
You can also look at libraries such as Netty or ZeroMQ that do this extra work for you.

Using "send" to tcp socket/Windows/c

For the C send() function (in blocking mode), is it specified that the function returns the number of sent bytes once they have been received at the destination? I'm not sure I understand all the nuances, even after writing a demo app with WSAIoctl and WSARecv on the server side.
When does send() return a smaller byte count than was asked for in the buffer-length parameter?
And what counts as "received at the destination"? My first guess is when the data sits in the server OS's buffer and the server application is notified. My second is when the server application's recv() call has read it fully.
Unless you are using a (somewhat exotic) library, a send on a socket will return the number of bytes successfully passed to the TCP buffer, not the number of bytes received by the peer (see Microsoft's docs for example).
When you are streaming data via a socket, you need to check the number of bytes effectively accepted into the TCP send buffer. That's why a send command is usually placed inside a loop that will issue several sends if needed.
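
A sketch of that loop in Python, whose socket.send() has the same semantics as C's send() (the function name is illustrative; Python's socket.sendall() performs this loop internally):

def send_all(sock, data):
    # Loop until every byte has been accepted into the TCP send buffer.
    total_sent = 0
    while total_sent < len(data):
        sent = sock.send(data[total_sent:])  # may accept fewer bytes than offered
        if sent == 0:
            raise ConnectionError("connection broken during send")
        total_sent += sent
    # Reaching this point only means the local TCP buffer took the data,
    # not that the peer has received it.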
Errors in send are local: for example, if the socket is closed by the peer during a send operation (making your socket invalid), or if the operation times out (the TCP buffer not emptying, i.e. the peer not receiving data fast enough, or some other trouble).
After all sends have completed, you have no easy way of knowing whether the peer received all the bytes you sent. You'll usually just issue closesocket and make sure that your socket has a proper linger option set (i.e. only close after a timeout or after successfully finishing the send). Alternatively, you wait for a confirmation from the peer (for example via a recv that returns zero bytes, indicating that the connection was gracefully closed).

client does not receive all messages if server sends messages too quickly with pickle python

My client side cannot recv both messages if the sender sends them too quickly.
sender.py
import pickle
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', int(port)))
sock.listen(1)
conn, addr = sock.accept()
#conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
# sends message 1 and message 2
conn.send(pickle.dumps(message1))
#time.sleep(1)
conn.send(pickle.dumps(message2))
Where both message 1 and message 2 are pickled objects.
client.py
import pickle
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((ip,int(port)))
message1 = pickle.loads(sock.recv(1024))
print(message1)
message2 = pickle.loads(sock.recv(1024))
When I run this code as it is, I am able to print out message1, but I am unable to receive message2 from the sender. The socket blocks at message2.
Also, if I uncomment time.sleep(1) in my sender-side code, I receive both messages just fine. I'm not sure what the problem is. I tried to flush my TCP buffer each time by setting TCP_NODELAY, but that didn't work. What is actually happening, and how would I ensure that I receive both messages?
Your code assumes that each send on the server side will match a recv on the client side. But TCP is a byte stream, not a message-based protocol. This means it is likely that your first recv will already contain data from the second send, which may simply be discarded by pickle.loads as junk after the pickled data. The second recv will then only receive the remaining data (or just block, since all the data was already received), so pickle.loads will fail.
The common way to deal with this situation is to construct a message protocol on top of the TCP byte stream. This can be done, for example, by prefixing each message with a fixed-length size (say a 4-byte unsigned int written with struct.pack('!I', ...)) when sending, and when reading, first reading the fixed-length size value and then reading the message of the given size.
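
A sketch of that protocol applied to the example above; the helper names are illustrative, and '!I' packs the length as a 4-byte big-endian unsigned int:

import pickle
import struct

def send_pickled(sock, obj):
    payload = pickle.dumps(obj)
    # Prefix each message with its length so the receiver knows where it ends.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_pickled(sock):
    def recv_exact(n):
        # Loop until exactly n bytes have been read from the stream.
        buf = bytearray()
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("connection closed mid-message")
            buf.extend(chunk)
        return bytes(buf)
    (length,) = struct.unpack("!I", recv_exact(4))
    return pickle.loads(recv_exact(length))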

How can I manipulate SOME/IP message content at run time?

I was trying to manipulate SOME/IP messages by falsifying their content (payload) sent between 2 ECUs at run time.
I set up the VN6510A hardware in MAC bypassing mode and integrated it into the data traffic path between those 2 ECUs to monitor and control all Ethernet data streams:
ECU A ---> eth1 interface --VN6510A-- eth2 interface ---> ECU B
I successfully catch our target SOME/IP messages and I also successfully manipulate their payload.
But in the end we get 2 SOME/IP messages: the real incoming message and the falsified message, forwarded at the same time.
How could we bind those 2 SOME/IP messages, the real one and the falsified one, together, so that only one falsified SOME/IP message remains, given that I am using the same SOME/IP message handle?
I used the callback function void OnEthPacket(LONG channel, LONG dir, LONG packet) to be notified of each received Ethernet packet.
Probably by setting your VN.... to "Direct" and not "MAC Bypassing"
Well, we could not manipulate messages at run time using the Vector VN6510A box, simply because the box doesn't support this feature.

recvfrom() only gets up to 2048 bytes from UDP socket

I have to call the function repeatedly to get all the data, given that the len argument is set to 10240. But this eventually ends up blocking. How can I get all the data and safely return, in a platform-independent way?
BTW, I use netcat on the sender side:
cat ocr_pi.png | nc -u server 5555
Is this issue related to nc's behavior? I didn't find any parameter to set the UDP packet size (-O is for TCP).
Thanks.
UDP sends and receives data as messages. In the len argument, you tell recvfrom() the maximum message size you can receive, and then recvfrom() waits until a full message arrives, regardless of its size. UDP messages are self-contained; unlike TCP, a UDP message cannot be partially sent or received. It is an all-or-nothing thing. If the received message is larger than the len value you specify, the excess is discarded; depending on the platform, you either get an error (e.g. WSAEMSGSIZE on Windows) or silent truncation.
The only time recvfrom() blocks is when there is no message available to read. If you don't want to block, use select() (or pselect(), epoll, or another platform equivalent) to wait with a timeout for a message to arrive, and then call recvfrom() only when there is actually something to read.
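
For example, a minimal sketch in Python (the same pattern works in C); the port, timeout, and buffer size are arbitrary illustrative values:

import select
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5555))

while True:
    # Wait up to 5 seconds for a datagram to become readable.
    readable, _, _ = select.select([sock], [], [], 5.0)
    if not readable:
        break  # timed out: stop instead of blocking forever on recvfrom()
    data, addr = sock.recvfrom(65535)  # each call returns one whole datagram
    print(f"received {len(data)} bytes from {addr}")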