Multiple identities on Serial RTU server(s) using pymodbus - pymodbus

I got my setup working with a Serial RTU server with 10 slave contexts. The server has its identity set up according to the manual, so when polling the identity registers of any of the 10 addressed slaves, I get the same response.
However, my application needs to emulate 10 separate devices on a common RS485 Modbus line, each with its own identity contents (like having different Modbus devices from different vendors).
Is there a way to create an identity for each slave context? Or perhaps run 10 separate server instances, each with an individual identity and a single slave context?
Kind Regards
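As far as I can tell, pymodbus's stock servers take a single identity object per server instance, so one workaround is a per-unit identity table consulted when answering Read Device Identification (function 0x2B / MEI type 0x0E). A minimal plain-Python sketch of that lookup, where the table contents and function names are invented for illustration (this is not pymodbus API):

```python
# One identity per slave unit, keyed by unit id. Object IDs follow the
# Modbus spec for Read Device Identification:
# 0x00 VendorName, 0x01 ProductCode, 0x02 MajorMinorRevision.
IDENTITIES = {
    unit: {
        0x00: f"Vendor {unit}",       # VendorName, different per unit
        0x01: f"PC-{unit:02d}",       # ProductCode
        0x02: "1.0",                  # MajorMinorRevision
    }
    for unit in range(1, 11)
}

def read_device_identification(unit, object_id):
    """Return the identity string a 0x2B/0x0E request for this unit should see."""
    try:
        return IDENTITIES[unit][object_id]
    except KeyError:
        return None  # would map to a Modbus exception response on the wire
```

A custom request handler (or a small patch to the server's identity lookup) could consult this table using the unit id of the incoming request instead of the single shared identity object.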

Related

Two NodeMCUs unable to communicate with Raspberry Pi using MQTT

The Raspberry Pi acts as the local host, and I'm trying to send data to it over MQTT from two NodeMCUs, each on a different topic.
e.g.:
if x > 10 then I send 1, otherwise 0
The same logic is used on both NodeMCUs.
If I communicate with only one NodeMCU I get a good response, but when both NodeMCUs are connected, the Raspberry Pi console sometimes receives no value.
This often depends on both the client and the broker used, and your configuration of each. The fact that two have problems where one does not suggests a client ID collision: every MQTT client device must have a different client ID. If a broker receives subscriptions from two clients with the same ID, the broker may disconnect one, usually the first. If each client is configured to reconnect, this can cause an endless series of disconnects for both, each of them connected half the time.
Any broker that does not disconnect duplicate clients could still fail to deliver to one, because it uses the client IDs to track which clients a message has been delivered to. The first client that pings for messages on its subscriptions will receive the latest message, and any later ones will miss that message because the message is already marked as delivered to that client ID.
Most clients avoid these problems with random IDs, yet let the developer set one manually. Does your identical logic set a client ID? You can verify what is actually set on each device through the broker's logs.
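One low-effort guard against ID collisions is to derive the client ID from something guaranteed unique, rather than hardcoding the same string in identical firmware. A small sketch (the "nodemcu" prefix is just an example):

```python
import uuid

def make_client_id(prefix="nodemcu"):
    """Build a per-device MQTT client ID; a random uuid4 suffix keeps
    two boards running identical firmware from colliding."""
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

# With paho-mqtt 1.x you would pass it when constructing the client:
#   client = mqtt.Client(client_id=make_client_id())
```

On real hardware you might prefer a stable unique value such as the board's MAC address, so the broker sees the same client ID across reboots.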

Modbus client and server with message forwarding

I am working on a legacy Modbus program for an industrial SCADA system.
Currently, the C++ program acts as both a Modbus TCP server and client.
Client behaviour:
It reads from a number of vendor PLCs (servers) on site, performs calculations and sends control commands back to the PLCs based on the data received across the site.
Server behaviour:
responds to a variety of TCP read and write requests from web interfaces and laptops on site.
Until now, this has worked fine, but we have recently installed a logging client on the network which polls our program very frequently (sub-second) and this has revealed timing issues: the program can potentially take a very long time in its client loop performing calculations and reading PLC values before acting as a server and responding to incoming requests.
Easy solution would be to split the programs into a modbus server and client instance, and keep them both running on the same embedded PC.
The issue I have is that the remote web interface (HMI) must be able to control the behaviour of vendor PLC 2, and vendor PLC 2 will only allow one TCP connection from the embedded PC. In the past, the program has handled write requests from the HMI by forwarding them to PLC 2 via the open socket.
I'd be keen to gather thoughts on best practices here.
My thinking:
the modbus server program will need to respond to the HMI requests and somehow store the information required for vendor PLC 2, and it will also need to set a status register to inform the modbus client that there is data for vendor PLC 2.
The modbus client program will need to read the status register (and data) from the server and pass this on to vendor PLC 2.
Am I heading in the right direction?
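For what it's worth, the status-register handshake described in the question can be modeled in a few lines. The register numbers and function names below are invented for illustration; in the real programs these values would live in the Modbus server's datastore:

```python
# Shared register block standing in for the Modbus server's datastore.
STATUS_REG = 100          # 1 = data pending for vendor PLC 2 (assumed address)
DATA_REGS = range(101, 105)

registers = {r: 0 for r in [STATUS_REG, *DATA_REGS]}

def hmi_write(values):
    """Server side: an HMI write lands here; stash the data and raise the flag."""
    for reg, val in zip(DATA_REGS, values):
        registers[reg] = val
    registers[STATUS_REG] = 1

def client_poll(forward):
    """Client side: on each loop, forward any pending data to PLC 2, clear the flag."""
    if registers[STATUS_REG]:
        forward([registers[r] for r in DATA_REGS])
        registers[STATUS_REG] = 0
```

The flag-clearing step is what prevents the client from forwarding the same data twice; a production version would also need to handle the HMI writing again before the client has polled.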
Without details of your implementation I can only guess that the problem is that your program is single-threaded, and the delays are caused by waiting for responses from the PLCs.
If my assumption is correct, you need to switch to the select() function and redesign your software to be fully asynchronous: put all sockets (both connecting and accepted) into an FD set and wait for events on them.
win32:
https://learn.microsoft.com/en-us/windows/desktop/api/winsock2/nf-winsock2-select
linux:
https://www.opennet.ru/cgi-bin/opennet/man.cgi?topic=select&category=2
I wrote a similar app (but without the calculations) ages ago on win32, and it easily handled about 200 PLCs while running on the same machine as the SCADA.
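The same select-based idea can be sketched in Python with the stdlib selectors module; here a socketpair stands in for an accepted connection, and one pass of the loop services whichever sockets are ready without blocking on any single one:

```python
import selectors
import socket

def serve_once(sel):
    """One pass of the event loop: handle whichever sockets are ready."""
    for key, _ in sel.select(timeout=1.0):
        conn = key.fileobj
        data = conn.recv(1024)
        if data:
            conn.sendall(data)    # echo back; a real server would dispatch here
        else:
            sel.unregister(conn)  # peer closed the connection
            conn.close()

# Demo: a socketpair standing in for an accepted client connection.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
sel.register(b, selectors.EVENT_READ)
a.sendall(b"read holding regs")
serve_once(sel)
reply = a.recv(1024)
```

In the real program, the PLC-side sockets and the listening socket would all be registered in the same selector, so a slow PLC never stalls replies to the logging client.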

Communicate with a microcontroller over ethernet

I am planning to make some microcontroller boards for miscellaneous tasks, for example measuring analog voltages or controlling other instruments. Each board needs to be controlled, and its data downloaded, from one place. For that purpose I would use an Ethernet interface and do the communication over that. So my question is: what would be the most suitable method of achieving that? My ideas are: run a web server on each module and communicate with POST/GET, or run a telnet server on the boards and communicate with a telnet client. Security and speed/latency are not a concern, but data integrity is.
I don't need an HTML-based GUI for the modules, because I will implement an application which communicates with the modules periodically, gets the data from them, and stores it in a database. The database is what I will use later, for examining the data.
Another example:
I have a board which measures temperature. There is a server on the board itself, run by the MCU. It is connected to a router via the ENC chip, and my PC is also connected to that router. I have an application which connects to the server run by the ATmega328, collects the data, and stores it in a database, repeating this, let's say, every hour. I would use an ATmega328 and an ENC28J60 Ethernet interface chip. What do you recommend?
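The hourly collect-and-store loop on the PC side is straightforward to sketch. Here the HTTP fetch is abstracted into a callable so a stub can stand in for the board; sqlite3 is in the Python stdlib, while the table layout and the idea of a `/temp` endpoint on the board are assumptions:

```python
import sqlite3
import time

def store_reading(db, fetch):
    """Poll the board once via `fetch` and persist the reading."""
    db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, temp_c REAL)")
    db.execute("INSERT INTO readings VALUES (?, ?)", (time.time(), fetch()))
    db.commit()

# In the real setup `fetch` would GET http://<board-ip>/temp and parse
# the body; a stub stands in here so the storage side can be tested alone.
db = sqlite3.connect(":memory:")
store_reading(db, fetch=lambda: 21.5)
rows = db.execute("SELECT temp_c FROM readings").fetchall()
```

Keeping the transport behind a callable also makes it painless to switch between the HTTP and telnet approaches later.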

Manage a serial port from two processes simultaneously

I have the following scenario:
A Raspberry Pi connected to a device via a serial port
A 3G dongle connected to the Raspberry Pi (with the ability to make/receive calls)
One process reading the data from the serial port and redirecting it to a server (over 3G)
Another process waiting for an incoming call; when someone calls, the program takes the data from the serial port and redirects it via the 3G dongle using AT commands (like a fax call). The call is handled using AT commands, and the caller should be able to "speak" with the end device connected to the serial port.
The problem is that the two processes cannot live together, since they use the same serial port: once one process has started, the other cannot read data from the serial port (port busy).
Is there a way to achieve this? Can I make a "fake" serial port, or something that redirects the data?
Thank you very much
You could write a single service that communicates with the real serial port but itself offers two virtual serial ports, as described here: Virtual Serial Port for Linux
Like all good GSM things, there's a specification for that :)
GSM 07.10 is the specification, and there have been libraries supporting it for some time. Some are libraries you can build into your server systems, and some are actual daemons.
A quick Google for "gsm multiplexing" will get you started, I am sure.
I had a similar problem managing a serial port between some independent processes. In the end, my best solution was to use Redis to mediate calls to the port.
A process that wants to send something publishes to the Redis channel 'uart_request' a JSON message containing the arguments for the serial-port call, plus a hash ('hash_message') made from the timestamp. Just before publishing the JSON, the process subscribes to 'hash_message'.
Finally, one process listens for posts on 'uart_request'. When a post arrives, it takes the 'hash_message' from the JSON, makes the call to the serial port, waits for the response, and publishes the response to 'hash_message'.
The point is that just one process controls the serial port, so it is not necessary to keep opening and closing it. It works really well.
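The pattern above, modeled with plain in-process queues so it runs anywhere; with redis-py the equivalents would be r.publish(channel, payload) and r.pubsub().subscribe(channel) (channel names follow the answer, the rest is illustrative):

```python
import json
import queue
import time

channels = {}   # channel name -> Queue, standing in for Redis pub/sub

def publish(channel, message):
    channels.setdefault(channel, queue.Queue()).put(message)

def subscribe(channel):
    return channels.setdefault(channel, queue.Queue())

# Requesting process: subscribe to the reply channel first, then publish.
reply_channel = f"hash_{time.time()}"
replies = subscribe(reply_channel)
publish("uart_request", json.dumps({"cmd": "read", "reply_to": reply_channel}))

# The single port-owner process: pop the request, talk to the port, reply.
req = json.loads(subscribe("uart_request").get())
publish(req["reply_to"], "serial response")   # real code would do the UART I/O

answer = replies.get()
```

Subscribing before publishing matters: with real Redis pub/sub, a reply published before the subscription exists is simply lost.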

Poor UDP broadcast performance to multiple processes on same PC

We have an application that broadcasts data using UDP from a server system to client applications running on multiple Windows XP PCs. This is on a LAN, typically Gigabit. It has been running fine for some years.
We now have a requirement to have two (or more) of the client applications running on each quad core PC, with each instance of the application receiving the broadcast data. The method I have used to implement this is to give each client PC multiple IP addresses. Each client app then connects to the server using the same port number but on a different IP. This works functionally but the performance for some reason is very poor. My data transfer rate is cut by around a factor of 10!
To get multiple IP addresses I have tried both using two NIC adapters and assigning multiple IP addresses to a single NIC in the advanced TCP/IP network properties. Both methods give similarly poor performance. I also tried NICs from several different manufacturers, but that didn't help either.
One thing I did notice is that the data arrives more fragmented. With just a single client on a PC, if I send 20 kBytes of data to the client it almost always receives it all in one chunk. But with two clients running, the data mostly arrives in blocks the size of a frame (1500 bytes), so my code has to iterate more times. But I wouldn't expect this on its own to cause such a dramatic performance hit.
So I guess my question is does any one know why the performance is so much slower and if anything can be done to speed it up?
I know I could re-design things so that the server only sends data to one client per PC, and that client could then mirror the data on to the other clients on the same PC. But that is a major redesign and re-coding effort so I'd like to keep that as a last resort.
Instead of creating one IP address for each client, try using setsockopt() to enable the SO_REUSEADDR option for each of your sockets. This will allow all of your clients to bind to the same port on the same host address and receive the broadcast data. Should be easier to manage than the multiple NIC/IP address approach.
SO_REUSEADDR will allow broadcast and multicast sockets to share the same port and address. For more info see:
SO_REUSEADDR and UDP behavior in Windows
and
Uses of SO_REUSEADDR?
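A minimal sketch of the suggestion in Python (stdlib only); note that the exact semantics of duplicate UDP binds are platform-dependent, as the linked questions discuss, so this shows the bind mechanics rather than guaranteeing delivery behavior on every OS:

```python
import socket

def broadcast_listener(port):
    """UDP socket that can share its port with other listeners on this host."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Must be set *before* bind, and on every socket sharing the port.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    return s

# Two clients on the same PC bound to the same port -- this is the bind
# that fails with EADDRINUSE when SO_REUSEADDR is not set on both.
first = broadcast_listener(0)               # port 0: let the OS pick one
port = first.getsockname()[1]
second = broadcast_listener(port)           # second bind to the same port
```

With both clients bound this way to the single real IP address, the extra-NIC/extra-IP configuration (and whatever routing overhead it introduces) can be dropped entirely.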