OPC DA disconnects and reconnects automatically after 20-30 seconds - opc

We have an AIM.OPC server, which is connected to a firewall and then to a data diode.
After several hours, AIM.OPC disconnects at the data diode and reconnects automatically after 20-30 seconds.
This keeps repeating, 6-7 times a day.
When we take a packet capture with Wireshark, we see the message below:
192.178.192.1 192.178.168.2 TCP 60 49157 -> 59021 [FIN, ACK] Seq=273 Ack=740 Win=65024 Len=0
Here 192.178.192.1 is my OPC client
and 192.178.168.2 is my data diode station.
Can someone help me resolve this issue, so that AIM.OPC does not disconnect?

Related

Why is UDP packet reception seemingly optimized mid-execution?

I'm running a client-server configuration over Ethernet and measuring packet latency at both ends. The client (Windows) is sending packets every 5 ms (confirmed with Wireshark), as it should. Yet the server (embedded Linux) only receives packets at 5 ms intervals for a few seconds, at which point it stops for 300 ms. After this break the latency is only 20 us. After another period of a few seconds it takes another 300 ms break. This repeats indefinitely (300 ms break, then a burst of packets arriving 20 us apart). It seems as if the server program is being optimized mid-execution to read I/O in shorter bursts. Why is this happening?
Disclaimer: I haven't posted the code, as the client and server are small subsets of more complex applications, however, I am willing to factor it out if an obvious answer doesn't present itself.
This is UDP, so there is no handshake or flow-control mechanism. Those 300 ms must be caused by work the server is doing while processing the received UDP messages. During those 300 ms the server has surely lost around 60 messages from the client that were never read.
You probably want to check that the server does not take more than 5 ms to process each message if it processes them on a single thread. If the server uses multiple threads to process the messages and the processing takes some time, even if only 1 ms, you may reach a point where all the threads are competing for resources and do not finish in time to read the next message. For the problem you are describing, I would bet the server is multithreaded and that this is the problem, although I cannot be 100% sure for lack of information. In any case, you want to measure the time it takes to process each message, because you might be dealing with real-time requirements.
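Since the original code hasn't been posted, here is a minimal Java sketch (the port number and handler are hypothetical) of the kind of per-message timing check suggested above:

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class TimedUdpServer {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(5005)) {   // hypothetical port
            byte[] buf = new byte[1500];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);                  // blocks until a datagram arrives

                long start = System.nanoTime();
                process(packet);                         // placeholder for the real work
                long elapsedMicros = (System.nanoTime() - start) / 1_000;

                // If processing regularly exceeds the 5 ms send interval, the socket
                // buffer backs up and you see exactly the burst/gap pattern described.
                if (elapsedMicros > 5_000) {
                    System.err.println("slow message: " + elapsedMicros + " us");
                }
            }
        }
    }

    private static void process(DatagramPacket packet) {
        // Application-specific handling goes here.
    }
}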
I spaced out the measurements to 1 in every 1000 packets and now it is behaving itself. I was calling printf every 5 ms, which must eventually have filled the printf TX queue entirely. That then delayed execution for 300 ms. Once printf caught its breath, the program had a queue full of incoming packets and so appeared to be receiving packets every 20 us.
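The equivalent of that fix, sketched in Java since the original code isn't shown (port and interval are illustrative): print diagnostics only for every 1000th packet, so console I/O can never stall the receive loop.

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class ThrottledUdpReceiver {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(5005)) {   // hypothetical port
            byte[] buf = new byte[1500];
            long received = 0;
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                received++;
                // Log only 1 packet in every 1000, as in the fix above, so that
                // slow console output cannot back up the socket receive buffer.
                if (received % 1000 == 0) {
                    System.out.println("received " + received + " packets");
                }
            }
        }
    }
}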

What is the overhead traffic of a TCP connection (plus TCP clarifications)?

We have a TCP connection.
1. If nothing is sent over it, how much traffic (in bytes) is needed per second to keep that connection open?
2. How long does it take to open a connection from a client in South America to a server in Northern Europe?
3. If I have to send a small amount of data (max 256 bytes) every x seconds, at what value of x does it become better to close the connection and reopen it instead of keeping it open all the time?
I do not expect exact data - estimates will suffice.
1) none.
2) some time. Try it and see. For a rough estimate, ping one end from the other and double it.
3) try it. It depends on bandwidth and, more importantly, latency. These vary over wide ranges. Usually it's better, speed-wise, to keep connections open. 256 bytes at intervals of seconds? I would keep the connection open, especially over paths with possibly high latency (e.g. intercontinental).
1. According to the TCP/IP standard, nothing. However, depending on the network conditions and any middleboxes (NAT devices, firewalls, etc.), a connection with no data going over it may be dropped. That could be a static timeout (say two minutes, or ten minutes, or an hour), or it could be based on a least-recently-used table in some device.
2. It depends on a lot of factors, and the biggest delay may come from the client's local network rather than the intercontinental link. However, the distance over the surface of the earth between those points is about 40 light-milliseconds, so (without TCP Fast Open) that would be 120 ms for the first data packet to get from the client to the server and 40 ms for the response, 80 ms more than on an already-open connection. (A small timing sketch follows this list.)
3. Assuming no broken middleboxes, it is always better to keep the connection open. However, the delay to recover from a "silently dropped" connection may be a lot longer than the time to open a new one; it might be appropriate for the client to manage its own timeout (on the order of a second or so), and open a new connection and retry the last message if it hasn't gotten a response by then. It depends on what you're sending; transactional messages might merit such explicit fast retry more than a remote copy of syslog.
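As a rough way to "try it and see" for question 2, you can time the TCP connect from the client side; a minimal Java sketch, where the host and port are placeholders:

import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectTimer {
    public static void main(String[] args) throws Exception {
        // Placeholder address; substitute the real server in Northern Europe.
        InetSocketAddress server = new InetSocketAddress("server.example.com", 443);

        long start = System.nanoTime();
        try (Socket socket = new Socket()) {
            socket.connect(server, 10_000);            // 10 s connect timeout
        }
        long millis = (System.nanoTime() - start) / 1_000_000;

        // connect() returns once the three-way handshake completes, so this is
        // roughly one round-trip time, in line with the estimates above.
        System.out.println("TCP connect took " + millis + " ms");
    }
}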

TCP keep-alive basic query

I have a TCP socket for my app. TCP keep-alive is enabled with a 10-second frequency.
In addition, messages flow between the app and the server every 1 second to get status.
So, since messages are flowing over the socket at a faster rate anyway, no keep-alives will be sent at all.
Now, consider this scenario: the remote server is down, so the periodic message send (which happens every 1 second) fails 3-5 times in a row. I don't think that by enabling TCP keep-alives we can detect that the socket is broken, can we?
Do we then have to build logic into our code so that, if this periodic message fails a certain number of times in a row, the other end is assumed to be dead?
Let me know.
In your application it makes no sense to enable keep-alive.
Keep-alive is for applications that have an open connection and don't use it all the time; you are using yours all the time, so keep-alive is not needed.
When you send something and the other end has crashed, TCP on the client will send the retransmissions with an increasing timeout. Finally, if you have a blocking socket, you will get an error indication on the send operation, at which point you know that you have to close the socket and retry the connection.
An error indication is where the return code of the socket operation is < 0.
I don't know the values of these timeouts by heart, but they can add up to a minute or longer.
When the server is shut down gracefully, meaning it closes its sending side of the socket, you will get that information by receiving 0 bytes on your receiving socket.
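In Java terms (a hedged sketch, since the question doesn't show any code and the host, port, and message format here are made up), those two signals show up as an IOException on the write path and a -1 return from read() on a graceful close:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class StatusClient {
    public static void main(String[] args) {
        try (Socket socket = new Socket("server.example.com", 9000)) {   // placeholder
            socket.setKeepAlive(true);                // keep-alive, as discussed above
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();
            byte[] reply = new byte[256];

            while (true) {
                out.write("STATUS\n".getBytes());     // periodic 1-second status request
                out.flush();

                int n = in.read(reply);
                if (n == -1) {
                    // Peer closed its side gracefully (the "0 bytes" case above).
                    System.out.println("server closed the connection");
                    break;
                }
                Thread.sleep(1_000);
            }
        } catch (IOException e) {
            // A crashed or unreachable peer eventually surfaces here, after TCP's
            // retransmission timeouts (the "error indication" described above).
            System.out.println("connection broken: " + e.getMessage());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}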
You might want to check out my answer from yesterday as well:
Reset TCP connection if server closes/crashes mid connection
No, you don't need to assume anything. The connection will break either because a send will time out or a keep alive will time out. Either way, the connection will break and you'll start getting errors on reads and writes.

How to send a UDP packet 2 or 3 times after failing to receive a response in Java?

I send a UDP packet to the server. If the server is OK, I receive the response packet fine, but when the server is down I do not get any response packet. Can anybody help me with how to send my packet to the server multiple times when I fail to receive the response packet? Moreover, I want to keep the connection with the server alive. Thanks in advance.
Well,
After you've sent the packet, you wait for the ACK (response) packet from the server. You could use DatagramSocket.setSoTimeout() with an appropriate time; if you get the timeout exception, increment a counter, and if that counter is less than 2 or 3, send the packet again and repeat these steps. If the counter reaches 2 or 3, the server is down, so just quit.
According to the Java documentation, receive will block until a packet is received or the timeout has expired.
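A minimal sketch of that retry loop; the host, port, timeout, and payload here are illustrative:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

public class UdpRetryClient {
    public static void main(String[] args) throws Exception {
        byte[] request = "hello".getBytes();                  // hypothetical payload
        InetAddress server = InetAddress.getByName("server.example.com");
        int port = 9876;                                      // hypothetical port

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(2_000);                       // wait up to 2 s for a reply

            for (int attempts = 0; attempts < 3; attempts++) {   // send at most 3 times
                socket.send(new DatagramPacket(request, request.length, server, port));
                try {
                    byte[] buf = new byte[512];
                    DatagramPacket reply = new DatagramPacket(buf, buf.length);
                    socket.receive(reply);                    // blocks until reply or timeout
                    System.out.println("got reply of " + reply.getLength() + " bytes");
                    return;
                } catch (SocketTimeoutException e) {
                    // No reply in time; fall through and send again.
                }
            }
            System.out.println("server appears to be down");
        }
    }
}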
To keep the connection alive you need to implement a ping-pong. In another thread of your program, send a keep-alive packet (any small packet will do) and wait for a response. I suggest using a different port number for this purpose so that these packets don't get mixed up with the normal data packets. These packets can be sent every 2 seconds or every 2 minutes, depending on your particular needs. When the thread receives the ACK packet, it updates a private time variable with the current time, for example:
lastTimeSeen = System.currentTimeMillis();
Put a method in your thread class to access the value of that variable.
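A rough sketch of that keep-alive thread (the port, interval, and payload are illustrative, and getLastTimeSeen() is the accessor mentioned above):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

public class KeepAliveThread extends Thread {
    private volatile long lastTimeSeen = System.currentTimeMillis();
    private final InetAddress server;
    private final int port;

    public KeepAliveThread(InetAddress server, int port) {
        this.server = server;
        this.port = port;
    }

    public long getLastTimeSeen() {
        return lastTimeSeen;        // lets other code see when the server last answered
    }

    @Override
    public void run() {
        byte[] ping = new byte[] { 0 };                  // any small payload will do
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(2_000);
            while (!isInterrupted()) {
                socket.send(new DatagramPacket(ping, ping.length, server, port));
                try {
                    DatagramPacket pong = new DatagramPacket(new byte[16], 16);
                    socket.receive(pong);                // wait for the keep-alive ACK
                    lastTimeSeen = System.currentTimeMillis();
                } catch (SocketTimeoutException ignored) {
                    // No answer this round; lastTimeSeen keeps its old value.
                }
                Thread.sleep(2_000);                     // keep-alive interval
            }
        } catch (Exception e) {
            // A socket error or interruption ends the keep-alive thread.
        }
    }
}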

Connection refused sockets. Normal behavior?

I have a socket server which accepts multiple connections from various clients. I'm testing it on localhost with a client application which connects, sends data, and closes the connection 10 times every 10 ms. Sometimes the test client raises an error: "Connection refused by the remote server" or something similar.
Is this normal behavior for the server application?
10 connects every 10 ms is one connection per millisecond, which is a rather fast rate. Are these connection attempts being made in parallel? If so, perhaps you are filling up the server's listen() backlog queue; IIRC, clients that try to connect while the backlog queue is full will get a connection-refused error.
To test that hypothesis, try passing in larger or smaller numbers as the second argument to listen() on your server, and see if that makes the connection-refused error occur more or less often.
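The question doesn't say which language the server is written in; as an illustration in Java, the backlog is the second argument to the ServerSocket constructor (the port and backlog values here are arbitrary):

import java.net.ServerSocket;
import java.net.Socket;

public class BacklogServer {
    public static void main(String[] args) throws Exception {
        // The second constructor argument is the listen() backlog discussed above.
        // Try different values (e.g. 5 vs. 200) and see how often clients get refused.
        try (ServerSocket server = new ServerSocket(7000, 200)) {
            while (true) {
                try (Socket client = server.accept()) {
                    // Handle the client here; the slower this is, the longer new
                    // connection attempts sit waiting in the backlog queue.
                }
            }
        }
    }
}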
I'm with Jeremy. You didn't mention the protocol, but I assume it's SOCK_STREAM. It will take longer than 10 ms to complete the TCP handshake on anything but the most local connection, eventually causing a backlog (and a subsequent connection-refused error) no matter how high you set your listen backlog.
You'd be way ahead if you could keep the connection open, rather than closing it down during each of your computation cycles.