I am working on a legacy Modbus program for an industrial SCADA system.
Currently, the C++ program acts as both a Modbus TCP server and a Modbus TCP client.
Client behaviour:
It reads from a number of vendor PLCs (servers) on site, performs calculations on the data gathered across the site, and sends control commands back to the PLCs.
Server behaviour:
It responds to a variety of TCP read and write requests from web interfaces and laptops on site.
Until now, this has worked fine, but we recently installed a logging client on the network that polls our program very frequently (sub-second), and this has revealed timing issues: the program can spend a very long time in its client loop performing calculations and reading PLC values before it acts as a server and responds to incoming requests.
The easy solution would be to split the program into separate Modbus server and client instances, and keep both running on the same embedded PC.
The issue I have is that the remote web interface (HMI) must be able to control the behaviour of vendor PLC 2, and vendor PLC 2 will only allow one TCP connection from the embedded PC. In the past, the program has handled write requests from the HMI by forwarding them on to PLC 2 via the open socket.
I'd be keen to gather thoughts on best practices here.
My thinking:
The Modbus server program will need to respond to the HMI requests and somehow store the information intended for vendor PLC 2, and it will also need to set a status register to inform the Modbus client program that there is data waiting for vendor PLC 2.
The Modbus client program will need to read the status register (and the data) from the server program and pass it on to vendor PLC 2.
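Something along these lines is what I have in mind for the client-program side of the hand-off. This is only a rough sketch assuming libmodbus (our legacy code may use a different stack), and the IP address of PLC 2 and the register numbers are made up:

    // Sketch: poll a "mailbox" on the local server program; when the status
    // register is set, forward the data to vendor PLC 2 over the single socket.
    #include <modbus.h>     // libmodbus
    #include <cstdint>
    #include <unistd.h>

    int main() {
        modbus_t* local = modbus_new_tcp("127.0.0.1", 502);     // the new server program
        modbus_t* plc2  = modbus_new_tcp("192.168.1.20", 502);  // vendor PLC 2 (made-up address)
        modbus_connect(local);                                  // error handling omitted
        modbus_connect(plc2);

        for (;;) {
            uint16_t status = 0;
            modbus_read_registers(local, 100, 1, &status);      // 100 = status register (example)
            if (status == 1) {
                uint16_t data[4];
                modbus_read_registers(local, 101, 4, data);     // 101..104 = data for PLC 2 (example)
                modbus_write_registers(plc2, 0, 4, data);       // forward over the one open connection
                uint16_t clear = 0;
                modbus_write_registers(local, 100, 1, &clear);  // acknowledge / clear the flag
            }
            // ...normal client-loop work (PLC reads, calculations) continues here...
            usleep(200000);
        }
    }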
Am I heading in the right direction?
Without details of your implementation I can only guess that the problem is that your program is single-threaded, and the delays are caused by waiting for responses from the PLCs.
So, if my assumption is correct, you need to switch to the select() function and redesign your software to be fully asynchronous. You have to put all the sockets (both the ones you connect out and the ones you accept) into an fd_set and wait for events on them.
win32:
https://learn.microsoft.com/en-us/windows/desktop/api/winsock2/nf-winsock2-select
linux:
https://www.opennet.ru/cgi-bin/opennet/man.cgi?topic=select&category=2
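A minimal sketch of such a select() loop (POSIX flavour; the listening socket and the PLC sockets are assumed to be opened elsewhere, and error handling is omitted):

    #include <sys/select.h>
    #include <algorithm>
    #include <vector>

    void event_loop(int listen_fd, std::vector<int>& plc_fds) {
        for (;;) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(listen_fd, &readfds);
            int maxfd = listen_fd;
            for (int fd : plc_fds) {
                FD_SET(fd, &readfds);
                maxfd = std::max(maxfd, fd);
            }
            timeval tv{1, 0};                               // 1 s tick for periodic calculations
            int n = select(maxfd + 1, &readfds, nullptr, nullptr, &tv);
            if (n < 0) break;
            if (FD_ISSET(listen_fd, &readfds)) {
                // accept() a new HMI/logger connection and add it to the set
            }
            for (int fd : plc_fds) {
                if (FD_ISSET(fd, &readfds)) {
                    // a response or request is ready: recv() it without blocking
                }
            }
            // run calculations here; never block waiting on a single socket
        }
    }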
I wrote the same kind of app ages ago on Win32 (but without the calculations) and it easily handled about 200 PLCs while running on the same machine as the SCADA.
Related
I am working on an automation project using a Raspberry Pi and Windows IoT. Is it possible to broadcast to a web page, similar to Server-Sent Events? I am monitoring certain events, and instead of calling the server every few seconds for updates, I would like the server to send the alert to the web page directly. Any help would be greatly appreciated.
I think you can use WebSockets. WebSockets are an advanced technology that makes it possible to open an interactive communication session between the user's browser and a server. You can refer to this sample. Or you can use IoTWeb to embed a simple HTTP and WebSocket server into your application.
Update:
WebSockets are a great addition to the HTTP protocol suite, but there are numerous situations where they cannot be used.
Some companies have firewalls that will prevent WebSockets from working.
If you are deploying software in a shared hosting environment, you may not be permitted to use WebSockets.
If you are behind a reverse proxy that isn't configured for, or whose software doesn't support, pass-through of the WebSocket protocol, WebSockets won't work.
Another option is long polling: the browser makes an XHR request and the server simply doesn't respond until it has something to send. But this way, if you want two-way communication with the server, you are effectively using two sockets: one is tied up hanging/waiting for the long-poll response, and the other is used by the client to send new information to the server. Long polling is also problematic because the client has to be able to handle XHR errors, some of which are tricky or even impossible to handle. You can find more differences and disadvantages on the internet.
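To show the shape of a long-poll exchange without any framework, here is a bare-bones sketch of the server side using plain POSIX sockets (on Windows IoT you would use the corresponding Winsock/UWP APIs instead; the port, payload, and the "wait for news" stand-in are all made up):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <string>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);                    // hypothetical port
        bind(listener, (sockaddr*)&addr, sizeof(addr)); // error handling omitted
        listen(listener, 5);

        int client = accept(listener, nullptr, nullptr);
        char req[4096];
        recv(client, req, sizeof(req), 0);              // read (and here ignore) the XHR request

        sleep(10);                                      // stand-in for "block until there is news"
        std::string body = "{\"alert\":\"sensor tripped\"}";
        std::string resp = "HTTP/1.1 200 OK\r\n"
                           "Content-Type: application/json\r\n"
                           "Content-Length: " + std::to_string(body.size()) + "\r\n\r\n" + body;
        send(client, resp.data(), resp.size(), 0);      // respond only once there is something to say
        close(client);
        close(listener);
    }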
Is there a conventional way to write a program such that commands can be issued to it from the command line without a REPL? For example, the way you can send commands to a running nginx server using sudo /etc/init.d/nginx restart (or any other valid command besides restart).
One idea I had was having the long-running program create and monitor a Unix socket that other programs can write to in order to send it commands. Another was to create a local server with a REST interface that can be sent commands that way, though that seems a bit gross.
What's the right way to do this?
Both ways are ok, and you could even consider using some RPC machinery, such as making your application serve JSONRPC on some unix(7) socket. Or use a fifo(7). Or use D-Bus.
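For example, a bare-bones unix(7) socket command listener might look like this (a sketch only; the socket path and the command names are made up, and error handling is omitted):

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <cstring>

    int main() {
        int listener = socket(AF_UNIX, SOCK_STREAM, 0);
        sockaddr_un addr{};
        addr.sun_family = AF_UNIX;
        std::strncpy(addr.sun_path, "/tmp/mydaemon.sock", sizeof(addr.sun_path) - 1);
        unlink(addr.sun_path);                      // remove a stale socket from a previous run
        bind(listener, (sockaddr*)&addr, sizeof(addr));
        listen(listener, 5);

        for (;;) {
            int c = accept(listener, nullptr, nullptr);
            char cmd[128] = {};
            ssize_t n = read(c, cmd, sizeof(cmd) - 1);
            if (n > 0 && std::strncmp(cmd, "restart", 7) == 0) {
                // re-read configuration, restart workers, etc.
                write(c, "ok\n", 3);
            }
            close(c);
        }
    }

You could then poke it from a shell with something like echo restart | socat - UNIX-CONNECT:/tmp/mydaemon.sock (or nc -U /tmp/mydaemon.sock with a netcat that supports Unix sockets).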
A common habit on Unix is to have applications reload their configuration files on e.g. the SIGHUP signal, and save some persistent state (before terminating) on SIGTERM. Read signal(7) (notice that only async-signal-safe routines can be called from signal handlers; a good approach is to only set some volatile sig_atomic_t variable inside the handler and test it outside). See also the POSIX signal.h documentation.
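Concretely, that pattern looks roughly like this (a small sketch):

    #include <csignal>
    #include <unistd.h>

    static volatile sig_atomic_t got_sighup = 0;
    static volatile sig_atomic_t got_sigterm = 0;

    extern "C" void on_signal(int sig) {
        // only touch the flags here; do the real work outside the handler
        if (sig == SIGHUP)  got_sighup = 1;
        if (sig == SIGTERM) got_sigterm = 1;
    }

    int main() {
        std::signal(SIGHUP, on_signal);
        std::signal(SIGTERM, on_signal);
        for (;;) {
            if (got_sighup)  { got_sighup = 0; /* reload configuration files */ }
            if (got_sigterm) { /* save persistent state */ break; }
            pause();                         // or your normal poll()/select() loop
        }
    }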
You might make your application become a specialized HTTP server (e.g. using some HTTP server library like libonion) and give it some Web interface (or REST, or SOAP ...); the user (or sysadmin) will then use his browser to interact with your application.
You could make your server systemd compatible. (I don't know exactly what that requires; it is perhaps D-Bus related.)
You could embed some command interpreter (like Guile or Lua) in your app and have some limited kind of REPL running over some IPC channel like a socket or a fifo. Beware of nasty code injection.
I had a similar issue where I have a plethora of services running on any number of machines and each is in need of communicating with several others.
My main problem was not so much the communication between the services. That can be done with a simple message sent over a connection (as Basile mentioned, it can be TCP, UDP, Unix sockets, FIFOs...). However, when you have over 20 services, many of which need to communicate with several others, you start having a headache getting all the connections right (I have such a system, although it has a relatively limited number of services, around 10, and that's already very complicated).
So I created a process (yet another service) called Communicator. All services connect to the Communicator service and when they need to send a message, they include the name of the service they want to reach. The Communicator service is in charge of sending the message to the right place—i.e. it could be to another Communicator service running on a different computer. Communicator has a graph of all the services available on your network and knows how to send messages to them without your service having to know anything about all of that. Computing a graph can be really complex.
For this purpose, I created the eventdispatcher project. It is in C++, which may not be what you're interested in, although you could use it from other languages that interface with C/C++. The structure of the messages is "proprietary" (specific to the Communicator), but you can create any message you want. A message includes a name and parameters (param-name=value). The first version has a simple one-line text communication system. The newer version accepts JSON as well (still one line of text per message).
The system supports TCP, UDP, Unix sockets, and FIFOs, and between threads you can have thread-safe FIFOs. It also understands signals (SIGHUP, SIGTERM, etc.). It has a specific connection to listen for the death of a thread. It supports encryption over TCP via OpenSSL. Messages can be dispatched automatically (hence the current name of the library). Connections are assigned a timer. And there are CUI and GUI (Qt) extensions as well.
The one main point here is that all your connections can be polled (see poll()), so you can implement a system that reacts to events instead of one that sleeps, checks for events, sleeps, checks again, and so on, or worse, one with a single blocking connection where everything has to happen on that one connection or your service gets stuck. This is one reason Unix has relied on signals for so long: early versions of Unix did not have select() or poll().
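Independently of the eventdispatcher library, the underlying poll() pattern is roughly this (a sketch, error handling omitted):

    #include <poll.h>
    #include <vector>

    void run(std::vector<int>& fds) {
        std::vector<pollfd> pfds;
        for (int fd : fds) pfds.push_back({fd, POLLIN, 0});
        for (;;) {
            int n = poll(pfds.data(), pfds.size(), -1);   // block until any connection has an event
            if (n < 0) break;
            for (auto& p : pfds) {
                if (p.revents & POLLIN) {
                    // read the message from p.fd and dispatch it to its handler
                }
            }
        }
    }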
I am presently working on a client-server solution to transfer files to another machine via a socket network connection. I am fairly new to the whole client-server thing and therefore have the following - admittedly very basic - question:
For the file transfer, does it make any difference if I am sending the file from a client to a server or from a server to a client?
Any qualified insight into this will be much appreciated!
For the file transfer, does it make any difference if I am sending the file from a client to a server or from a server to a client?
Basically, no, it does not matter. Once you have the connection made, you are free to send data in both directions. Bear in mind, though, that a server won't process data sent to it unless it explicitly reads from the socket.
To be a bit more general, server and client are completely arbitrary for a home brewed implementation of data transfer. If you boil this down to the simplest concept then you are just opening a socket and writing data to it on one side, and on the other side you are reading from a different socket.
You might choose to implement a single client program capable of connecting to other clients (P2P) and sending files back and forth. In that case you could call the program currently sending the file the "server", and the program currently receiving it the "client".
Alternatively, you could implement two programs, one for client and one for server. Your server will listen for connections and the client will decide when it wants to connect to the server.
Remember that there are network limitations for connecting. If the program that is listening for connections is behind a firewall then you have to be sure you are forwarding the correct ports. If you are connecting machines within a LAN then you probably have nothing to worry about.
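To illustrate, here is a stripped-down sketch (POSIX sockets, error handling omitted, file name and port arbitrary) where the listening side happens to send the file; the roles could just as well be reversed, since either end can call send() and recv() once the connection exists:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <fstream>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);
        bind(listener, (sockaddr*)&addr, sizeof(addr));
        listen(listener, 1);
        int peer = accept(listener, nullptr, nullptr);   // wait for the other side to connect

        std::ifstream file("payload.bin", std::ios::binary);
        char buf[4096];
        while (file) {
            file.read(buf, sizeof(buf));
            std::streamsize n = file.gcount();
            if (n > 0) send(peer, buf, n, 0);            // "server" -> "client"; the reverse works the same way
        }
        close(peer);
        close(listener);
    }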
This morning I booted my computer and had multiple applications needing an update. While I was waiting for the applications to update, a question came to mind which I thought I'd ask here.
The question is: how does each application know which of the internet data being retrieved is theirs?
The applications don't even have to care about it; they let the kernel sort that out.
When an application establishes a connection with a remote computer, the kernel assigns that application a local port, which is a number between 0 and 65535. The port can either be requested by the application or assigned at random by the kernel. Generally there is only one application per port; it is possible for multiple applications to receive the same data, but this is rare.
When a packet is received by the network interface, the kernel sorts the packet by its destination port. There is a table in the kernel mapping ports to processes, and each application receives the relevant data without caring about any other data that may be coming into the computer.
If you are a programmer, you can learn about all this stuff by reading about socket programming:
http://en.wikipedia.org/wiki/Network_socket
is a good place to start. You can also google "socket programming" with your preferred programming language to get an idea of how this is set up on the programming end.
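For a taste of what that looks like in code, here is a tiny sketch (POSIX sockets, error handling omitted) that connects out and then asks the kernel which local port it was given; the remote address is just an arbitrary example:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in remote{};
        remote.sin_family = AF_INET;
        remote.sin_port = htons(80);
        inet_pton(AF_INET, "93.184.216.34", &remote.sin_addr);  // example address
        connect(fd, (sockaddr*)&remote, sizeof(remote));

        sockaddr_in local{};
        socklen_t len = sizeof(local);
        getsockname(fd, (sockaddr*)&local, &len);                // ask which local port the kernel chose
        std::printf("kernel assigned local port %d\n", (int)ntohs(local.sin_port));
        close(fd);
    }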
How can I interact with a PLC to send and receive data to/from a remote server (PC)?
For example, a robot that has a PLC and needs to interact with a server located in a nearby room over a wireless connection.
Data must move all the time: the PLC sends data to the server, and the server must send the results of its computations on that data back to the PLC.
Update: my PLC brand is Delta, but its model has not been selected yet.
A common scenario is that the manufacturer provides an OPC server for the PLC. You should check their website once you know the model. Then it is just a matter of creating an OPC client.
A good way to start is to get a free OPC server simulator like the one from Matrikon. It doesn't need any hardware. OPC is a standard interface (although there are often some minor variations between manufacturers) - if you can get your client to work with the test server you can probably get it to work with the PLC.