How to make two-way socket communication - sockets

I would like to set up two-way communication using TCP and UDP sockets in Linux. The idea is as follows; it is a kind of sensor network.
server side
while loop (
    (1) check if there is an incoming TCP control message
        if yes, update the system based on the control message
    at all other times, keep spamming out UDP messages
)
client side
while (
    keep receiving the UDP broadcast messages
    once it has received 100 UDP messages, send a TCP control message to the server
)
Part (1) is the only place I cannot work out. I find that if I use a non-blocking TCP socket with select() in part (1) with a short interval, select() soon returns 0 and the control message is never received. Alternatively I could set a long interval for select(), but that blocks the loop and the UDP messages cannot be sent out. I want the UDP messages to go out efficiently, while the server can still notice a client's TCP control message at any time.
Could anyone give me some hints on part (1)?

You should only attempt a recv() if the corresponding read FD is set after select(). If select() returns zero, none of them is set: the timeout has expired, so you shouldn't do anything except send your UDP message.
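For illustration, a minimal sketch of that loop in C is shown below; the connected TCP control socket (tcp_fd), the UDP socket and its broadcast destination (udp_fd, dest, dest_len), and the 10 ms timeout are all placeholders for this example, not details taken from the question.

/* Sketch: check the TCP control socket with a short select() timeout,
 * and keep pushing UDP messages out on every pass. */
#include <sys/select.h>
#include <sys/socket.h>

void server_loop(int tcp_fd, int udp_fd,
                 const struct sockaddr *dest, socklen_t dest_len)
{
    char ctrl[256];
    const char payload[] = "sensor-data";

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(tcp_fd, &rfds);

        /* A zero or very short timeout only *checks* for a pending
         * control message; select() returning 0 just means "nothing yet"
         * and is not an error. */
        struct timeval tv = { 0, 10000 };   /* 10 ms */

        int ready = select(tcp_fd + 1, &rfds, NULL, NULL, &tv);
        if (ready > 0 && FD_ISSET(tcp_fd, &rfds)) {
            ssize_t n = recv(tcp_fd, ctrl, sizeof ctrl, 0);
            if (n > 0) {
                /* ... update the system from the control message ... */
            }
        }

        /* At all other times, keep the UDP messages going out. */
        sendto(udp_fd, payload, sizeof payload, 0, dest, dest_len);
    }
}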

Related

Using UDP socket with poll. Blocking vs Non-blocking

A lot of examples can be found about non-blocking TCP sockets, but I find it difficult to find a good explanation of how UDP sockets should be handled with the poll/select/epoll system calls.
Blocking or non-blocking?
When we have to deal with TCP sockets it makes sense to set them to non-blocking, since it takes only one slow client connection to prevent the server from serving other clients. However, there are no ACK messages in UDP, so my assumption is that writing to UDP should be fast enough in either case. Does that mean we can safely use a blocking UDP socket with the poll family of system calls if each time we are going to send a small amount of data (10 KB, for example)? From this discussion I assume that an ARP request is the only thing that can substantially block the sendto function, but isn't that a one-time thing?
Return value of sendto
Let's say the socket is non-blocking. Can there be a scenario where I try to send 1000 bytes of data and the sendto function sends only part of it (say 300 bytes)? Does that mean it has just sent a UDP packet with 300 bytes, and the next time I use sendto it will send the rest in a new UDP packet? Is this situation also possible with blocking sockets?
Return value of recvfrom
The same question applies to recvfrom. Can there be a situation where I need to call recvfrom more than once to obtain the full UDP packet? Is that behaviour different for blocking and non-blocking sockets?
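For reference, the sketch below shows how these return values are usually checked with a blocking UDP socket driven by poll(); udp_fd, the peer address, and the buffer sizes are assumptions made for illustration.

/* Sketch: sending and receiving whole datagrams on a blocking UDP
 * socket driven by poll().  Setup of udp_fd and peer is assumed. */
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

void udp_io(int udp_fd, const struct sockaddr *peer, socklen_t peer_len)
{
    char out[10 * 1024];               /* the ~10 KB payload from the question */
    char in[64 * 1024];                /* big enough for any UDP datagram */
    memset(out, 'x', sizeof out);

    struct pollfd pfd = { .fd = udp_fd, .events = POLLIN | POLLOUT };
    if (poll(&pfd, 1, 1000) <= 0)
        return;                        /* timeout or error */

    if (pfd.revents & POLLOUT) {
        /* UDP is message-oriented: sendto() queues the whole datagram
         * (return value == sizeof out) or fails; it never sends part
         * of a datagram, whether the socket is blocking or not. */
        ssize_t sent = sendto(udp_fd, out, sizeof out, 0, peer, peer_len);
        if (sent != (ssize_t)sizeof out)
            perror("sendto");
    }

    if (pfd.revents & POLLIN) {
        /* One recvfrom() returns one whole datagram; if the buffer is
         * too small, the excess bytes are discarded rather than left
         * for a later call. */
        ssize_t got = recvfrom(udp_fd, in, sizeof in, 0, NULL, NULL);
        if (got < 0)
            perror("recvfrom");
    }
}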

LabVIEW - Check if TCP read buffer contains more data

I've got a TCP Server that processes messages of the following structure:
[ Msg. Size (2 Byte) | Msg. Payload (N Byte) ]
The process is as follows:
Read 2 bytes from the TCP connection to identify payload size N.
Read N payload bytes and do something with it.
Close TCP connection.
To reduce networking overhead I'd like to piggyback multiple messages.
[ Msg. Size #1 | Msg. Payload #1 ][ Msg. Size #2 | Msg. Payload #2 ] ...
Obviously the processing loop must not close the TCP connection if the TCP read buffer contains more data (is not empty).
Is there any way to reliably check whether more data is available in a TCP read buffer from within LabVIEW 2013?
I could call read again and check whether it times out, but I'd like to avoid that solution since it introduces unwanted latency.
In the processing loop described above the standard LabVIEW TCP VIs are used (e.g. TCP Wait On Listener, TCP Read, TCP Write, TCP Close Connection).
The client should shutdown the sending side of the connection as soon as it does not wish to send any more queries. The server should keep reading from the connection. If it detects that the other side has shut down the sending side, it can close the connection as soon as it has sent the final reply.
There is no need to wait for the read to timeout. A half-closed connection should be detected as soon as all data is read.
If for some reason you cannot support half-closed connections, you need some way to indicate the final request in the data that the server receives. You can do this with a special "I'm done" message. There are other ways.
By the way, you should not use the term "packet" to refer to application-level messages. You should use the term "message" to refer to an application-level unit of data that represents a single request or response.
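LabVIEW VIs aside, the server logic described in that answer looks roughly like the following in plain C sockets; the big-endian 2-byte length prefix and all names are assumptions made for illustration.

/* Sketch: read length-prefixed messages until the client half-closes
 * its sending side (recv() returns 0), then send the final reply and
 * close the connection. */
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

/* Read exactly len bytes; returns 1 on success, 0 on orderly shutdown,
 * -1 on error. */
static int read_exact(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return (int)n;             /* 0 = peer shut down its side */
        p += n;
        len -= (size_t)n;
    }
    return 1;
}

void serve(int conn_fd)
{
    for (;;) {
        uint16_t size_be;
        if (read_exact(conn_fd, &size_be, sizeof size_be) <= 0)
            break;                     /* half-close or error: stop reading */

        uint16_t size = ntohs(size_be);   /* assuming big-endian prefix */
        char payload[65536];
        if (read_exact(conn_fd, payload, size) <= 0)
            break;

        /* ... do something with the payload, queue up replies ... */
    }
    /* ... send the final reply here ... */
    close(conn_fd);
}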
You can wire a zero for the timeout. That way you do not introduce unwanted latencies.

Multiple clients connecting to same server port at the same time?

Does UDP allow two clients to connect at the same time to a server port?
DatagramSocket udp1 = new DatagramSocket(8000); // = localhost:8000 <-> ?
DatagramSocket udp2 = new DatagramSocket(8000);
What happens if udp1 and udp2 are created from two different IPs and send data at the same time?
Will it cause any issue?
Note: UDP doesn't really have a concept of "connect", just sending and receiving packets. (E.g. if making a TCP connection is analogous to making a telephone call, then sending a UDP packet is more like mailing a letter.)
Regarding two packets arriving at the same UDP port on a server at the same time: the TCP/IP stack keeps a fixed-size receive buffer for each socket that the server creates, and whenever a packet arrives at the port that socket is bound to, the packet is placed into that buffer. Then the server program is woken up and can recv() the data whenever it cares to do so. So in most cases, both packets will be placed into the buffer and then recv()'d and processed by the server program. The exception would be if there is not enough room left in the buffer for one or both of the packets to fit into it (remember it's a fixed-size buffer); in that case, the packet(s) that wouldn't fit into the buffer will simply be dropped and never seen again.
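For illustration, a sketch of the server side in C: a single socket bound to port 8000 receives datagrams from any number of clients, and recvfrom() reports which client sent each one. Everything apart from the bind/recvfrom pattern is a placeholder and error handling is trimmed.

/* Sketch: one UDP socket serving datagrams from multiple clients. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8000);
    bind(fd, (struct sockaddr *)&addr, sizeof addr);

    for (;;) {
        char buf[2048];
        struct sockaddr_in from;
        socklen_t from_len = sizeof from;

        /* Each call returns one datagram from whichever client sent it;
         * datagrams from different clients simply queue up in the
         * socket's receive buffer until read (or the buffer overflows). */
        ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                             (struct sockaddr *)&from, &from_len);
        if (n < 0)
            break;

        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &from.sin_addr, ip, sizeof ip);
        printf("%zd bytes from %s:%d\n", n, ip, ntohs(from.sin_port));
    }
    return 0;
}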

Will TCP connection lose packets?

Say server S has a successful TCP connection with client C.
C keeps sending 256-byte packets to S.
Is it possible that only part of one of those packets is received, but the connection does not break (and new packets can still be received correctly)?
I thought the TCP protocol itself guaranteed that no bytes are lost while the connection is up. But it seems not?
P.S. I'm using Python's socketserver library.
The TCP protocol does guarantee delivery. Thus (assuming there are no bugs in your code and in the TCP stack), the scenario you describe is impossible.
Do bear in mind that TCP is stream- rather than packet-oriented. This means that you may need to call recv() multiple times to read the entire 256-byte packet.
As @NPE said, TCP is a stream-oriented protocol, which means there is no guarantee of how many data bytes are sent in each TCP packet nor how many bytes are available for reading on the receiving socket. What TCP ensures is that the receiving socket will be provided with the data bytes in the same order in which they were sent.
Consider a communication through a TCP connection socket between two hosts A and B.
When the application in A requests to send 256 bytes, for example, A's TCP stack can send them in one packet, in several individual packets, or even wait before sending them. So B may receive one or several packets containing all or part of the bytes A asked to send, and when the application in B is notified that received bytes are available, there is no guarantee it can read all 256 bytes at once.
The only guarantee is that the bytes B reads arrive in the same order in which A sent them.
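A short C sketch of the point both answers make: reading one 256-byte application message from the stream may take several recv() calls. The function name and return convention are illustrative.

/* Sketch: accumulate exactly MSG_SIZE bytes from a TCP stream.
 * Returns 1 on success, 0 if the peer closed, -1 on error. */
#include <sys/socket.h>
#include <sys/types.h>

#define MSG_SIZE 256

int read_message(int fd, char buf[MSG_SIZE])
{
    size_t have = 0;
    while (have < MSG_SIZE) {
        ssize_t n = recv(fd, buf + have, MSG_SIZE - have, 0);
        if (n == 0)
            return 0;           /* connection closed mid-message */
        if (n < 0)
            return -1;          /* error (check errno, e.g. EINTR) */
        have += (size_t)n;      /* may be fewer bytes than asked for */
    }
    return 1;
}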

What's the difference between TIME-WAIT Assassination and SO_REUSEADDR

I was reading about using the SO_LINGER socket option to intentionally 'assassinate' the time-wait state by setting the linger time to zero. The author of the book then goes on to say we should never do this and in general that we should never interfere with the time-wait state. He then immediately recommends using the SO_REUSEADDR option to bypass the time-wait state.
My question is, what's the difference? In both cases you're prematurely terminating the time-wait state and taking the risk of receiving duplicate segments. Why is one good and the other bad?
TIME_WAIT is absolutely normal. It occurs after a TCP FIN on the local side followed by a TCP FIN ACK from the remote location. In TIME_WAIT you are just waiting for any stray packets to arrive at the local address. If there is a lost or stray packet, TIME_WAIT ensures that its TTL ("time to live") expires before the address is used again.
If you use SO_REUSEADDR then you are basically saying, "I will assume that there are no stray packets", which is an increasingly reasonable assumption on modern, reliable TCP networks. Stray packets are still possible, just unlikely.
Setting SO_LINGER to zero causes you to initiate an abnormal close, also called "slamming the connection shut." Here you do not respect TIME_WAIT and ignore the possibility of a stray packet.
If you see FIN_WAIT_1 then this can cause problems, as the remote location has not sent a TCP FIN ACK in response to your FIN; the remote process was either killed or the TCP FIN ACK was lost due to a network partition or a bad route.
When you see CLOSE_WAIT you have a problem: you are leaking connections, because you are not sending the TCP FIN ACK when given the TCP FIN.
I did some more reading and this is my understanding of what happens (hopefully correct):
When you call close() on a socket which has SO_REUSEADDR set (or your app crashes), the following sequence occurs:
TCP sends any remaining data in the send buffer, followed by a FIN
If close() was called, it returns immediately without indicating whether any remaining data was delivered successfully.
If data was sent, the peer sends a data ACK
The peer sends an ACK of the FIN and sends its own FIN packet
The peer's FIN is ACKed and the socket resources are deallocated.
The closing side still enters TIME-WAIT; SO_REUSEADDR does not bypass it, it merely allows another socket to bind() to the same local address while the old socket waits there.
When you close a socket with the SO_LINGER time set to zero:
TCP discards any data in the send buffer
TCP sends a RST packet to the peer
The socket resources are deallocated.
The socket does not enter TIME-WAIT.
So beyond the fact that setting linger to zero is a hack and bad style, it's also bad manners, as it doesn't go through a clean shutdown of the connection.
I have used SO_REUSEADDR to wildcard bind() to a local port on which some other program already had a connection open. It turns out this particular use never causes a problem, so long as no two sockets try to listen() on the same addr/port combination at the same time.
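For reference, the two options compared above look roughly like this in C; the port parameter, the backlog, and the omitted error handling are placeholders, and this is a sketch rather than a recommendation.

/* Sketch: SO_REUSEADDR before bind() versus SO_LINGER zero before close(). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int make_listener_reuseaddr(uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    /* SO_REUSEADDR: allow bind() even though an old connection on this
     * local port may still be sitting in TIME-WAIT. */
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    bind(fd, (struct sockaddr *)&addr, sizeof addr);
    listen(fd, 16);
    return fd;
}

void abortive_close(int fd)
{
    /* SO_LINGER with l_linger == 0: close() sends an RST, discards any
     * unsent data, and skips TIME-WAIT -- the "assassination" the
     * question asks about.  Generally to be avoided. */
    struct linger lg = { .l_onoff = 1, .l_linger = 0 };
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
    close(fd);
}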