LabVIEW - Check if TCP read buffer contains more data - sockets

I've got a TCP Server that processes messages of the following structure:
[ Msg. Size (2 Byte) | Msg. Payload (N Byte) ]
The process is as follows:
Read 2 bytes from the TCP connection to identify payload size N.
Read N payload bytes and do something with it.
Close TCP connection.
To reduce networking overhead I'd like to piggyback multiple messages.
[ Msg. Size #1 | Msg. Payload #1 ][ Msg. Size #2 | Msg. Payload #2 ] ...
Obviously the processing loop must not close the TCP connection if the TCP read buffer contains more data (is not empty).
Is there any way to reliably check if more data is available in a TCP read buffer from within LabVIEW 2013?
I could call read() again and check if it times out. But I'd like to avoid this solution since it introduces unwanted latencies.
In the processing loop described above, standard LabVIEW TCP VIs are used (e.g. TCP Wait On Listener, TCP Read, TCP Write, TCP Close Connection).

The client should shut down the sending side of the connection as soon as it does not wish to send any more queries. The server should keep reading from the connection. If it detects that the other side has shut down its sending side, it can close the connection as soon as it has sent the final reply.
There is no need to wait for the read to timeout. A half-closed connection should be detected as soon as all data is read.
If for some reason you cannot support half-closed connections, you need some way to indicate the final request within the data the server receives. You can do this with a special "I'm done" message. There are other ways.
By the way, you should not use the term "packet" to refer to application-level messages. You should use the term "message" to refer to an application-level unit of data that represents a single request or response.
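To illustrate the pattern described above, here is a minimal Python sketch of the server-side logic (a LabVIEW block diagram would implement the same structure with TCP Read and TCP Close Connection); the helper names, the big-endian size field, and the process() stub are illustrative assumptions, not part of the original design:

    import struct

    def process(payload):
        return payload   # placeholder: echo the payload back; replace with real handling

    def recv_exact(conn, n):
        # Read exactly n bytes; return None if the peer has shut down its sending side.
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:                            # b"" signals the half-close (EOF)
                return None
            buf += chunk
        return buf

    def handle_connection(conn):
        # Serve length-prefixed messages until the client half-closes its side.
        while True:
            header = recv_exact(conn, 2)             # Msg. Size (2 bytes)
            if header is None:                       # peer shut down its sending side
                break                                # no further requests will arrive
            (size,) = struct.unpack(">H", header)    # assumes a big-endian size field
            payload = recv_exact(conn, size)         # Msg. Payload (N bytes)
            if payload is None:
                break                                # connection ended mid-message
            conn.sendall(process(payload))
        conn.close()                                 # close only after EOF is seen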

You can wire a zero for the timeout value. Then you do not introduce any unwanted latency.

Related

Why does TCP not tell me how many bytes have been received?

When sending data on a TCP connection, the TCP stack makes sure the bytes arrive, and lost packets are resent as needed. To accomplish that, the other end sends ACK messages to acknowledge it has received the data.
As the TCP stack receives ACK messages, it knows that the other end has received some data and can stop retransmitting it. While it doesn't know exactly how much data the other end has received (ACKs may get lost or delayed), it at least has a lower bound on the amount of data that has already been received at the other end.
To my knowledge, TCP implementations don't make that information available to high level callers.
Specifically, I'm thinking of the POSIX socket APIs; as far as I know, there's no way to tell how much data the other end has received. I only know how much I have sent, but due to large buffer sizes it could take a long time for that data to be received.
Obviously, if I control the other end, I could occasionally report how much data has been received with explicit messages (either on the same TCP connection or via a separate channel), but that seems inelegant.
Why is that information not exposed to the user? It would be useful for things like progress bars.
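A minimal Python sketch of the explicit-reporting workaround mentioned above, purely as an illustration: the receiver periodically sends back a running byte count, which the sender can use to drive a progress bar. The 64 KiB reporting interval and the 8-byte counter format are arbitrary assumptions:

    import struct

    REPORT_EVERY = 64 * 1024   # assumed reporting interval: every 64 KiB received

    def receive_and_report(conn):
        # Receive data and periodically report the total bytes received so far.
        total = 0
        unreported = 0
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            total += len(chunk)
            unreported += len(chunk)
            if unreported >= REPORT_EVERY:
                conn.sendall(struct.pack(">Q", total))   # application-level progress report
                unreported = 0

    def read_progress_report(conn, bytes_sent):
        # On the sending side (e.g. in a separate thread), read the reports
        # and update a progress bar.
        report = conn.recv(8)
        if len(report) == 8:
            (received,) = struct.unpack(">Q", report)
            print(f"peer has received {received} of {bytes_sent} bytes")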

Are TCP/IP Sockets Atomic?

It is my understanding that a write to a TCP/IP socket will be atomic if the amount of data written is small. By atomic, I mean that the receiver will receive all of the data or none of the data. However, it is not atomic if the amount of data written is large. Am I correct? And if so, what counts as large?
Thanks,
Bob
No. TCP is a byte-stream protocol. No messages, no datagram-like behaviour.
For UDP, that is true, because all data written by the app is sent out in one UDP datagram.
For TCP, that is not true, unless the application sends only 1 byte of data at a time. A write to a TCP socket writes all of the data to a buffer associated with that socket. TCP then reads data from that buffer in the background and sends it to the receiver. How much data TCP actually sends in one TCP segment depends on the state of its flow-control mechanisms and on other factors, including:
Receive Window published by the other node (receiver)
Amount of data sent in previous segments in flight that are not acknowledged yet
Slow start and congestion avoidance algorithm state
Negotiated maximum segment size (MSS)
With TCP, you can never assume that what the application writes to a socket is received in a single read by the receiver. Data in the socket's buffer can be sent to the receiver in one or many TCP segments, and whenever data is available, a read on the receiving socket returns whatever data happens to be available at that moment.
Of course, all sent data will eventually reach the receiver, if there is no failure in the middle preventing that, and if the receiver does not close the connection or stop reading before the data arrives.
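A small Python sketch of what this means in practice, assuming a plain stream socket: one sendall() on the sending side may still surface at the receiver as several reads, so a receiver that needs the whole unit must loop until it has the expected number of bytes. The sizes are arbitrary:

    def send_blob(conn, blob):
        # One call on the application side ...
        conn.sendall(blob)   # sendall() loops until the whole buffer is handed to the stack

    def recv_blob(conn, expected_len):
        # ... may still arrive as many separate reads on the receiving side.
        parts = []
        received = 0
        while received < expected_len:
            chunk = conn.recv(65536)   # returns whatever bytes are available right now
            if not chunk:
                raise ConnectionError("connection closed before all data arrived")
            parts.append(chunk)
            received += len(chunk)
        return b"".join(parts)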

How can I transfer large data over a TCP socket?

How can I transfer large data without splitting? I am using a TCP socket. It's for a game; I can't use UDP, and there might be 1200 values in an array. I am sending the array in JSON format, but the server receives it split into pieces.
Also, is there any option to send HTTP requests like TCP? I need the responses in order, and it should be faster.
Thanks,
You can't.
HTTP may chunk it
TCP will segment it
IP will packetize it
routers will fragment it ...
and TCP will reassemble it all at the other end.
There isn't a problem here to solve.
You do not have much control over the splitting of packets/datagrams; the network decides this.
In the case of IP, you have the DF (don't fragment) flag, but I doubt it will be of much help here. If you are communicating over Ethernet, a 1200-element array may not fit into a single Ethernet frame (the payload size is limited by the MTU, typically 1500 octets).
Why does your application depend on the whole data arriving in a single unit, rather than in a single connection (potentially comprising multiple units)?
how can I transfer large data without splitting.
I'm interpreting the above to be roughly equivalent to "how can I transfer my data across a TCP connection using as few TCP packets as possible". As others have noted, there is no way to guarantee that your data will be placed into a single TCP packet -- but you can do some things to make it more likely. Here are some things I would do:
Keep a single TCP connection open. (HTTP traditionally opens a separate TCP connection for each request, but for low-latency you can't afford to do that. Instead you need to open a single TCP connection, keep it open, and continue sending/receiving data on it for as long as necessary).
Reduce the amount of data you need to send. (i.e. are there things that you are sending that the receiving program already knows? If so, don't send them)
Reduce the number of bytes you need to send. (The easiest way to do this is to zlib-compress your message-data before you send it, and have the receiving program decompress the message after receiving it. This can give you a size-reduction of 50-90%, depending on the content of your data)
Turn off Nagle's algorithm on your TCP socket. That will reduce latency by up to 200 ms and discourage the TCP stack from playing unnecessary games with your data.
Send each data packet with a single send() call (if that means manually copying all of the data items into a separate memory buffer before calling send(), then so be it).
Note that even after you do all of the above, the TCP layer will still sometimes spread your messages across multiple packets, etc. -- that's just the way TCP works. And even if your local TCP stack never did that, the receiving computer's TCP stack would still sometimes merge the data from consecutive TCP packets together inside its receive buffer. So the receiving program is always going to see the data arrive "split into pieces" sometimes, because TCP is a stream-based protocol and does not maintain message boundaries. (If you want message boundaries, you'll have to do your own framing -- the easiest way is usually to send a fixed-size (e.g. 1-, 2-, or 4-byte) integer byte-count field before each message, so the receiver knows how many bytes it needs to read before it has a full message to parse; a sketch of this approach follows below.)
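Here is a hedged Python sketch combining several of the suggestions above: a single persistent connection, zlib compression of the JSON payload, TCP_NODELAY to disable Nagle's algorithm, one send() per message, and a 4-byte length prefix for framing. The field sizes and helper names are illustrative, not a fixed protocol:

    import json
    import socket
    import struct
    import zlib

    def open_game_connection(host, port):
        # One persistent TCP connection, reused for every message.
        sock = socket.create_connection((host, port))
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # disable Nagle's algorithm
        return sock

    def send_message(sock, values):
        payload = zlib.compress(json.dumps(values).encode("utf-8"))  # shrink the JSON
        frame = struct.pack(">I", len(payload)) + payload            # 4-byte length prefix
        sock.sendall(frame)                                          # one send per message

    def recv_message(sock):
        (length,) = struct.unpack(">I", recv_exact(sock, 4))
        payload = recv_exact(sock, length)
        return json.loads(zlib.decompress(payload).decode("utf-8"))

    def recv_exact(sock, n):
        # Keep reading until exactly n bytes have arrived (TCP may deliver them in pieces).
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("connection closed mid-message")
            buf += chunk
        return buf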
Consider the idea that the issue may be elsewhere, or that you may be sending too much unnecessary data. For example, PHP has the isset() function. If you're creating an internet-based, turn-based game, you don't need to send all 1,200 variables back and forth every single time. Just send what changed, and when the other player receives that data, only change the variables that are set.

Will a TCP connection lose packets?

Say server S has a successful TCP connection with client C.
C keeps sending 256-byte-long packets to S.
Is it possible that S receives only part of one of those packets, while the connection does not break (and can continue to receive new packets correctly)?
I thought the TCP protocol itself guaranteed that no bytes would be lost while the connection is up. But it seems not?
P.S. I'm using Python's socketserver library.
The TCP protocol does guarantee delivery. Thus (assuming there are no bugs in your code and in the TCP stack), the scenario you describe is impossible.
Do bear in mind that TCP is stream- rather than packet-oriented. This means that you may need to call recv() multiple times to read the entire 256-byte packet.
As @NPE said, TCP is a stream-oriented protocol, which means there is no guarantee of how many data bytes are sent in each TCP packet, nor of how many bytes are available for reading in the receiving socket. What TCP ensures is that the receiving socket is provided with the data bytes in the same order in which they were sent.
Consider a communication through a TCP connection socket between two hosts A and B.
When the application in A requests to send 256 bytes, for example, A's TCP stack can send them in one packet, in several individual packets, or even wait before sending them. So B may receive one or several packets containing all or part of the bytes A asked to send, and when the application in B is notified that received bytes are available, it is not guaranteed that it can read all 256 bytes at once.
The only guarantee is that the bytes B reads are in the same order in which A sent them.
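Since the question mentions Python's socketserver library, here is a minimal sketch of a handler that copes with this stream behaviour by reading exactly 256 bytes per record. The handler class, port, and the empty process() method are illustrative assumptions:

    import socketserver

    RECORD_SIZE = 256   # the fixed message size used in the question

    class RecordHandler(socketserver.StreamRequestHandler):
        def handle(self):
            while True:
                # self.rfile is buffered, so read() keeps pulling from the socket
                # until it has RECORD_SIZE bytes or the client closes the connection.
                record = self.rfile.read(RECORD_SIZE)
                if len(record) < RECORD_SIZE:
                    break                    # connection closed (possibly mid-record)
                self.process(record)

        def process(self, record):
            pass                             # application-specific handling goes here

    if __name__ == "__main__":
        with socketserver.TCPServer(("0.0.0.0", 9000), RecordHandler) as server:
            server.serve_forever()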

How to make two-way socket communication

I would like to make two-way communication using TCP and UDP sockets in Linux. The idea is as follows; this is a kind of sensor network.
server side
while loop (
(1)check if there is incoming TCP control message
if yes, update the system based on control message
for all other time, keep spamming out UDP messages
)
client side
while (
keep receiving the UDP broadcast message
once it receives 100 UDP messages, it has to send a TCP control message to server
)
Part (1) is the only place I cannot work out. I find that if I use a non-blocking TCP socket with select() in part (1) with a short interval, select() soon returns 0 and the control message is not received. Alternatively, I could set a long interval for select(), but that would block the loop and the UDP messages could not be sent out. I want the UDP messages to be sent out efficiently, while the server can still notice a TCP control message from the client at any time.
Could anyone give me some hints on part (1)?
You should only attempt a recv() if the corresponding read FD is set after select(). If select() returns zero, none of them is set: the timeout has expired, so you shouldn't do anything except send your UDP message.
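A minimal Python sketch of that pattern, assuming arbitrary port numbers and placeholder message contents: select() is called with a zero timeout, so the loop keeps broadcasting UDP and only reads from TCP when a control message is actually pending:

    import select
    import socket

    def update_system(msg):
        pass   # application-specific handling of the control message

    def server_loop():
        tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        tcp.bind(("0.0.0.0", 6000))
        tcp.listen(5)

        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        udp.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

        watched = [tcp]                      # listening socket plus any accepted clients
        while True:
            # Zero timeout: poll for readable sockets without blocking the loop.
            readable, _, _ = select.select(watched, [], [], 0)
            for sock in readable:
                if sock is tcp:
                    conn, _ = sock.accept()  # a client wants to send a control message
                    watched.append(conn)
                else:
                    msg = sock.recv(4096)
                    if msg:
                        update_system(msg)   # (1) apply the TCP control message
                    else:
                        watched.remove(sock)
                        sock.close()
            # For all other time, keep spamming out UDP messages.
            udp.sendto(b"sensor data", ("255.255.255.255", 6001))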