Modbus TCP server with data from an SQL query using pymodbus

I need to make a SQL-to-Modbus TCP server. I looked at the https://pymodbus.readthedocs.io/en/latest/source/example/payload_server.html example, but I'm not sure I correctly understand how to organize the transfer of data from the database into the Modbus registers. The data changes dynamically, and every master must be able to receive it. I would be very grateful if you could tell me where in the code I should execute the SQL query, get the data set, and fill the server registers with it. Presumably this code should run in parallel with the main one and use synchronization?
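One common arrangement is exactly the parallel-thread-plus-shared-datastore shape the question guesses at: start the server normally and let a background thread periodically re-run the query and write the results into the server context. A minimal sketch, assuming pymodbus 3.x and a hypothetical SQLite database plant.db with a tags table; the register layout and poll interval are placeholders too:

    import sqlite3                     # hypothetical: any DB-API driver works the same way
    import time
    from threading import Thread

    from pymodbus.server import StartTcpServer          # pymodbus 3.x; 2.x uses pymodbus.server.sync
    from pymodbus.datastore import (
        ModbusSequentialDataBlock,
        ModbusServerContext,
        ModbusSlaveContext,
    )

    def poll_database(context, interval=1.0):
        """Re-run the SQL query and copy the result into the holding registers."""
        while True:
            conn = sqlite3.connect("plant.db")           # hypothetical database and table
            rows = conn.execute("SELECT value FROM tags ORDER BY id").fetchall()
            conn.close()
            values = [int(r[0]) & 0xFFFF for r in rows]  # Modbus registers are 16-bit
            # function code 3 = holding registers; write the block starting at address 0
            context[0].setValues(3, 0, values)
            time.sleep(interval)

    def run():
        # 100 holding registers, all initially zero
        store = ModbusSlaveContext(hr=ModbusSequentialDataBlock(0, [0] * 100))
        context = ModbusServerContext(slaves=store, single=True)
        Thread(target=poll_database, args=(context,), daemon=True).start()
        StartTcpServer(context=context, address=("0.0.0.0", 5020))

    if __name__ == "__main__":
        run()

All writes go through the slave context's setValues, so every connected master reads whatever the most recent refresh put there; if you need stricter atomicity across a whole block of registers, guard the poller and a custom datastore with a threading.Lock.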

Related

Why is my CTP not getting any data for one of the tables I am subscribing to?

I have a few chained tickerplants (CTPs) subscribing to a tickerplant (TP).
The subscription is established with no problems, but data doesn't seem to reach one of my CTPs.
I have two CTPs subscribing to the same table; one is getting data, the other isn't.
I checked .u.w and I can see the handles open for the table in question, but when I check the upd on my CTP, it receives all other tables except this one.
The upd on my CTP is a simple insert. I cannot see any data at all for the table; the t parameter is never set to the name of the table I am interested in. I don't know what else to check; any suggestions would be greatly appreciated. The pub logic is the default pub logic.
There are no errors in the TP.
UPDATE 1: I can send other messages and I receive data from the TP for other tables. The issue doesn't seem to occur in DR, just prod, and I cannot debug much in prod.
Without seeing more of your code it's hard to give a good answer.
A couple of things you could try:
Check if you can send a generic message (e.g. h(+;1;2)) from your TP to the CTP via the handle in .u.w; this will confirm the connection is OK.
If you can send a message, then you can check whether the issue is in your CTP. You can see exactly what is being sent by adding some logging to your upd function, or, if you think the message isn't getting that far, to your .z.ps message handler, e.g. .z.ps:{0N!x;value x} will perform some very basic client-side logging.
If you can't send a message down the handle in the TP, then it's possible there are other network issues at play (although I would expect you to be seeing errors in your TP if that were the case). You could check .z.W in your CTP in this case to see if the corresponding handle for the TP is present there.
You can also send a test update to your tickerplant and add logging along each step of the way if you really want to see the chain of events, but this could be quite invasive.

Is there a way to configure PostgreSQL to send records to a listener port?

I have a situation where I need to send each new record in a specific table to a TCP port on a server.
Is there a way to configure this in the postgresql.conf file? I am also open to any suggestion on how to send the data to the port from a trigger function.
Can anyone help me?
You can write a trigger in PL/Python that can do just about anything with the new row, but that doesn't sound like a good idea. For example, it would mean that the INSERT would fail if the trigger function encounters an error, and it would make the inserting transaction last unduly long.
I think what you want is logical decoding with a plugin like wal2json.
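As a minimal sketch of the wal2json route (assuming the wal2json plugin is installed on the server; the DSN, slot name, and listener address here are hypothetical), a small client can stream decoded changes from a logical replication slot and forward each one to the TCP port:

    import socket

    import psycopg2
    import psycopg2.extras

    # Hypothetical DSN; the connection_factory gives us a replication-capable cursor.
    conn = psycopg2.connect(
        "dbname=mydb",
        connection_factory=psycopg2.extras.LogicalReplicationConnection,
    )
    cur = conn.cursor()
    try:
        cur.create_replication_slot("to_listener", output_plugin="wal2json")
    except psycopg2.ProgrammingError:
        pass  # slot already exists from a previous run
    cur.start_replication(slot_name="to_listener", decode=True)

    # Hypothetical listener host and port.
    sock = socket.create_connection(("listener.example.com", 9000))

    def forward(msg):
        # msg.payload is a JSON document describing one transaction's changes
        sock.sendall((msg.payload + "\n").encode())
        # Acknowledge, so the server can recycle WAL behind us
        msg.cursor.send_feedback(flush_lsn=msg.data_start)

    cur.consume_stream(forward)

Unlike a trigger, this runs outside the inserting transaction, so a slow or failing consumer never blocks or fails the INSERT.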

PLC Data Logging System: Some basic questions

I am currently trying to work with PLCs. I am using the Kepware data logger to collect the PLC log data. The output looks like this:
Time Stamp          Signal                                          Signal O/P
20130407104040.2    Channel2.Device1.Group1-RBT1_Y_WORK_COMP_RST    1
20130407104043.1    Channel2.Device1.Group1-RBT2_Y_WORK_COMP_RST    0
...
I have a few questions:
1) What do 'Channel', 'Device', 'Group', and 'RBT1_Y_WORK_COMP_RST' mean? What I got from the PLC class presentation is that RBT1 (which refers to a robot) is a machine, 'Y_WORK_COMP_RST' is one of its signals, and 1/0 is the signal state at a particular timestamp (like 20130407104040.2). But I could not work out from the log data file what 'Channel', 'Device1', and 'Group1' mean.
2) I learned in class that a PLC is a hard real-time system. However, from the log data file I am seeing that the cycle time often differs: sometimes it takes (say) 5 seconds, sometimes 7 seconds. Why is that?
3) Is the log data captured by Kepware the actual machine output, or is it taken from the PLC program?
NB: I am very new to this field and have taken very few classes, so my questions may be naive. Please help me with some basic, not-too-technical answers.
1) Channel2.Device1.Group1... is the path where your Kepware data logger finds your RBT1. If you add another device using another technology, you would get something like Channel3.Device1.Group1....
This is entirely internal to the Kepware data logger and has nothing to do with your PLC. What interests you is the last part of the path: RBT1_Y_WORK_COMP_RST.
2) Are your PLC and the PC running the Kepware data logger time-synchronized?
3) You are connected to a PLC, so the Kepware data logger takes its data from the PLC; the PLC in turn has to be set up to collect the output of your machine if you want that.
1) The channel is the type of communication; it can be any of the communication protocols Kepware supports, like Modbus or DeviceNet.
The device is the device Kepware communicates with,
and the group is just a way to sort your items.
Items refer to your PLC addresses and let you name each item as you wish. This way you get an easy-to-read alias for your address.
2) Hard real-time means the PLC must react to an input change within a certain amount of time (ref: Wikipedia). Most of the time PLCs are programmed in Ladder; Ladder is sequential, and depending on the path the program takes, a scan may be longer or shorter. Also, the timestamp comes from Kepware, not the PLC, so it depends on Kepware's scan time as well.
3) Kepware connects to the PLC and requests the PLC addresses along with the output status.
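Given the path format described above (Channel.Device.Group-Item), a short parsing sketch makes the split explicit; the sample line is taken from the log excerpt in the question:

    import datetime

    def parse_log_line(line):
        """Split a Kepware log line into timestamp, tag path components, and value.

        Assumed tag path format: Channel.Device.Group-ItemName
        """
        stamp, tag, value = line.split()
        ts = datetime.datetime.strptime(stamp, "%Y%m%d%H%M%S.%f")
        path, item = tag.split("-", 1)
        channel, device, group = path.split(".")
        return ts, channel, device, group, item, int(value)

    print(parse_log_line(
        "20130407104040.2 Channel2.Device1.Group1-RBT1_Y_WORK_COMP_RST 1"
    ))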

SNMP: How to find a MAC address on the network?

I've written a Perl script to query devices (switches) on the network; it's used to find a MAC address on the LAN. But I would like to improve it. Right now I have to give my script these parameters:
The MAC address searched for
The switch's IP
The community
How can I get it to need just the IP and community?
I know that it depends on my network topology.
There is a main stack of three switches (Cisco 3750), which is then linked to other switches (2960) in cascade.
Does anyone have an idea?
Edit: I would like not to have to specify the switch.
Just give the MAC address and the community.
You have to solve two problems. First, where will the script send its first query? Then, suppose you discover that a MAC address was learned through port 1/2/1 on that switch and that port is connected to another switch: somehow your script must be smart enough to query the switch attached to port 1/2/1, and continue the same algorithm until there is no further switch to query.
What you are asking for is possible, but it would require you to either give the script network topology information in advance, or to discover it dynamically with CDP or LLDP. CDP always carries the neighbor's IP address; sometimes you can get that from LLDP. Both CDP and LLDP have MIB objects you can query, as sketched below.
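As a minimal sketch of the dynamic-discovery half (assuming net-snmp's snmpwalk is installed; the OID is CISCO-CDP-MIB::cdpCacheAddress, and the switch address and community are placeholders), you can pull a switch's CDP neighbor table like this:

    import subprocess

    CDP_CACHE_ADDRESS = "1.3.6.1.4.1.9.9.23.1.2.1.1.4"  # CISCO-CDP-MIB::cdpCacheAddress

    def cdp_neighbors(switch, community):
        """Walk the CDP cache of one switch.

        Each returned row holds a neighbor's IP address as four hex bytes,
        indexed by the local interface it was heard on; follow those IPs to
        reach the next switch in the chain.
        """
        out = subprocess.run(
            ["snmpwalk", "-v2c", "-c", community, "-On", switch, CDP_CACHE_ADDRESS],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.splitlines()

    print(cdp_neighbors("192.0.2.1", "public"))  # placeholder switch and community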
You'll need two scripts, basically. You already have a script to gather your data, but it takes too long to find a single MAC. Presumably you have a complete list of every switch and its IP address. Loop over them all, building a database of the CAM tables. Then, when you need to search for a MAC, just query your pre-built database. Update it about once an hour or so and you should maintain pretty accurate results. You can speed up the querying of several devices by running multiple SNMP walks in parallel.
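A minimal sketch of that pre-built-database approach (assuming net-snmp's snmpwalk and a hypothetical switch inventory; BRIDGE-MIB::dot1dTpFdbPort is the standard forwarding-table OID, though on Cisco gear each VLAN may need its own walk using community@vlan indexing):

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    SWITCHES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical switch inventory
    COMMUNITY = "public"
    DOT1D_TP_FDB_PORT = "1.3.6.1.2.1.17.4.3.1.2"      # BRIDGE-MIB::dot1dTpFdbPort

    def walk_cam(switch):
        """Walk the forwarding database of one switch.

        The OID index encodes the MAC address as six decimal octets;
        the value is the bridge port the MAC was learned on.
        """
        out = subprocess.run(
            ["snmpwalk", "-v2c", "-c", COMMUNITY, "-On", switch, DOT1D_TP_FDB_PORT],
            capture_output=True, text=True, check=True,
        )
        return switch, out.stdout

    # Walk all switches in parallel and keep the results as an in-memory "database";
    # rebuild it about once an hour, then answer MAC lookups from it instantly.
    with ThreadPoolExecutor(max_workers=8) as pool:
        cam_db = dict(pool.map(walk_cam, SWITCHES))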

Is there a Perl POE module for monitoring a database table for changes?

Is there any Wheel/PoCo/option to do this in Perl using POE:
I want to monitor a DB table for changed records (delete/insert/update) and react accordingly to those changes.
If so, could someone provide some code or a link that shows this?
Not that I'm aware of, but if you were really industrious you could write one. I can think of two ways to do it.
Better one first: get access to a transaction log / replication feed, e.g. the MySQL binlog. Write a POE::Filter for its format, then use POE::Wheel::FollowTail to get a stream of events, one for each statement that affects the DB. Then you can filter the data to find what you're interested in.
Not-so-good idea: using EasyDBI to run periodic selects against the table and see what changed. If your data is small it could work (but it's still prone to timing issues); if your data is big this will be a miserable failure.
If you were using PostgreSQL, you could create a trigger on your table's changes that calls NOTIFY, and in your client app open a connection and execute a LISTEN for the same notification(s). You can then have POE listen for file events on the DBD::Pg pg_socket file descriptor.
Alternatively, you could create a SQL trigger that sets off some other file or network event (writing to a file, named pipe, or socket) and let POE listen on that.
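The answers above are about POE, but the LISTEN/NOTIFY mechanism itself is language-neutral. As a minimal sketch of that route (in Python with psycopg2 rather than POE, assuming a hypothetical channel name table_changed raised by the trigger), the client blocks on the connection's socket descriptor exactly the way POE would on pg_socket:

    import select

    import psycopg2
    import psycopg2.extensions

    # Hypothetical DSN; the trigger on the table would run
    # NOTIFY table_changed (or pg_notify('table_changed', payload)).
    conn = psycopg2.connect("dbname=mydb")
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    cur.execute("LISTEN table_changed;")

    while True:
        # Block on the connection's file descriptor until the server sends
        # something; this is the file event POE would wait on.
        if select.select([conn], [], [], 60) == ([], [], []):
            continue  # timeout, loop again
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            print(f"change on channel {note.channel}: payload={note.payload}")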