PLC Data Logging System: Some basic questions

I am currently trying to work with PLCs. I am using the Kepware data logger to collect PLC log data. The output looks like this:
Time Stamp          Signal                                          Signal O/P
20130407104040.2    Channel2.Device1.Group1-RBT1_Y_WORK_COMP_RST    1
20130407104043.1    Channel2.Device1.Group1-RBT2_Y_WORK_COMP_RST    0
...
I have a few questions:
1) What do 'Channel', 'Device', 'Group' and 'RBT1_Y_WORK_COMP_RST' mean? What I got from the PLC class presentation is that RBT1 (which refers to a robot) is a machine, 'Y_WORK_COMP_RST' is one of its signals, and 1/0 is the signal state at a particular timestamp (like 20130407104040.2). But I could not work out from the log data file what 'Channel2', 'Device1' and 'Group1' mean.
2) I learned in class that 'a PLC is a hard real-time system'. However, in the log data file I see that the cycle time often differs; sometimes it takes (say) 5 seconds, sometimes 7 seconds. Why is that?
3) Is this log data taken by Kepware the actual machine output, or is it taken from the PLC program?
NB: I am very new to this field and have taken very few classes, so maybe my questions are stupid. Please help me with some basic, not-too-technical answers.

1) Channel2.Device1.Group1... is the path where your KEPware data logger can find your RBT1. If you add another device using another technology you would get something like Channel3.Device1.Group1....
This is entirely internal to the KEPware data logger and has nothing to do with your PLC. What interests you is the last part of the path: RBT1_Y_WORK_COMP_RST
2) Are your PLC and the PC running the KEPware data logger time-synchronized?
3) You are connected to a PLC, so the KEPware data logger takes data from it; your PLC then has to be set up to collect the output of your machine if you want to do that.

1) The channel is the type of communication; it may be one of several communication protocols, like Modbus or DeviceNet or whatever Kepware supports.
The device is the device Kepware communicates with,
and the group is just a way to sort your items.
Items refer to your PLC addresses and let you name each item as you wish. This way you get an easy-to-read alias for your address (see the short parsing sketch after this answer).
2) 'Hard real-time' means the PLC must react to a change of its inputs within a certain amount of time (ref: Wikipedia). Most of the time PLCs are programmed in Ladder; Ladder is sequential, and depending on the steps the program takes, a scan may be longer or shorter. Also, the timestamp comes from Kepware, not the PLC, so it depends on Kepware's scan time as well.
3) Kepware connects to the PLC and requests the PLC addresses along with their output status.
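To make the naming concrete, here is a minimal sketch (Python, outside Kepware) of how one of those logged tag paths breaks down, assuming the logger joins the Kepware channel/device/group path and the item name with a hyphen, as in the log sample above:

    # Split a logged Kepware-style tag path into its parts.
    # Assumes the "<Channel>.<Device>.<Group>-<Item>" layout shown in the log sample.
    def parse_tag(tag: str) -> dict:
        path, item = tag.split("-", 1)         # "Channel2.Device1.Group1", "RBT1_Y_WORK_COMP_RST"
        channel, device, group = path.split(".")
        machine, signal = item.split("_", 1)   # "RBT1", "Y_WORK_COMP_RST"
        return {
            "channel": channel,   # Kepware communication channel (driver/protocol)
            "device": device,     # the PLC/device Kepware talks to
            "group": group,       # just a way of organising items
            "machine": machine,   # e.g. RBT1 = robot 1
            "signal": signal,     # the actual PLC signal name
        }

    print(parse_tag("Channel2.Device1.Group1-RBT1_Y_WORK_COMP_RST"))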

Zabbix interface: wrong utilization of received/sent

I'm struggling with my interface item measurements for sent and received bits.
For the network interface items that measure sent and received bits, I have added preprocessing with a custom multiplier of 8.
When I use snmpwalk to get the current interface traffic, the value I get is:
IF-MIB::ifHCOutOctets.2 = Counter64: 11057731246261
But back in the Zabbix web monitoring frontend, the value shown for this interface's sent traffic is not what I expect. Has anyone had this problem or found a fix they can share?
My advice is to get inspired by the default Zabbix templates and how things are set up in them.
Do not forget to set a correct unit for your item - this can make a difference in how the final value of the item is calculated:
https://www.zabbix.com/documentation/5.0/en/manual/config/items/item
As far as post-processing goes, it seems to be set correctly in your case.
Your item may be incorrectly detected as unixtime and therefore you see a date instead of bytes per second, so I advise putting bps in the Units field.
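For reference, here is a small sketch (plain Python, not Zabbix itself) of the arithmetic the item is supposed to end up doing, assuming the item also uses a 'Change per second' preprocessing step as the default interface templates do; only the multiplier of 8 is taken from the question:

    # Two raw ifHCOutOctets samples -> bits per second.
    # The delta/time part is what "Change per second" preprocessing does;
    # the multiplier of 8 converts octets (bytes) to bits.
    def octets_to_bps(prev_octets, prev_ts, curr_octets, curr_ts):
        delta = curr_octets - prev_octets            # counter wrap ignored for brevity
        rate_octets_per_s = delta / (curr_ts - prev_ts)
        return rate_octets_per_s * 8                 # custom multiplier: 8

    # Example: two samples taken one minute apart
    print(octets_to_bps(11057731246261, 0, 11057731996261, 60))   # -> 100000.0 bits/s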

Logging a counter value to a batch name in siemens TIA Portal

I need to create a program for a 1214 PLC in TIA Portal and a Comfort HMI that counts several products using a count-up counter and stores that value under a specific batch name.
For every new batch, the operator would enter a new batch name, and the counter will count the products for that specific batch.
The count needs to be displayed on the HMI screen along with the history of batches and the associated final count number.
So basically, I need a way to attach a name (batch_id) to a final count and log that pair for later reference.
Can someone give me some advice as to how I would do that?
To clarify, I need help with storing and displaying the counter value and batch names, not with the counting itself.
I appreciate any help you can provide.
There are a few ways to do this (yes, you can use PLC data logs, and no, they don't have to create a separate file for each batch), but I am posting what I would do, because it's convenient for data backups; I have taken this approach before and know it works.
Write the count value (generated in the PLC), the batch value and the timestamp to a CSV file on a USB drive inserted into the Comfort HMI, using VBScripts on the HMI.
Split the files regularly - e.g. daily, weekly or monthly, to minimize the risk of any single file becoming corrupt and you losing the data. More detail follows.
Data Storage:
Count is calculated in the PLC. Batch ID and timestamp can be stored in the PLC (if you want it to be retentive after a power cut), or in the HMI.
You will have Comfort HMI tags representing each of these three values. Once a batch is complete, call a VB script that writes these values to the CSV file. There are application examples and forum entries on SIOS about this.
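As an illustration of the record that script would append (shown in Python for brevity; on the Comfort panel itself this would be a VBScript using the panel's file functions, and the file name and column order here are just assumptions):

    import csv, datetime

    # Append one finished batch as a row: batch ID, final count, timestamp.
    # On the HMI the file would live on the USB drive; "batches.csv" is a placeholder.
    def log_batch(batch_id, count, path="batches.csv"):
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow(
                [batch_id, count, datetime.datetime.now().isoformat(timespec="seconds")]
            )

    log_batch("BATCH-001", 1250)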
Data display as a table:
Read the CSV file values according to your filter criteria (day, time range, batch ID, batch ID range, etc.) using a VB script, and write them to internal HMI tags.
Display these internal HMI tags as I/O fields on a Comfort panel screen. This is your custom-built table, and yes, it's the only way to do it unless you want to create a custom control and install it on the panel.
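Again only as a sketch of the read-back step (Python standing in for the HMI-side VBScript, reusing the example column layout from above):

    import csv

    # Read the logged rows back and keep only those matching a filter,
    # e.g. a particular batch ID; the HMI script would then copy the
    # results into the internal HMI tags backing the I/O fields.
    def read_batches(path="batches.csv", batch_id=None):
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
        if batch_id is not None:
            rows = [r for r in rows if r and r[0] == batch_id]
        return rows

    for batch, count, ts in read_batches(batch_id="BATCH-001"):
        print(batch, count, ts)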
Backing up:
Disable logging and check that the USB drive is not in use, using a script, e.g. this one: https://support.industry.siemens.com/cs/document/89855157
Remove the USB, copy the files, re-insert it and activate logging again.
(You implement the 'disable' and 'activate' logging features yourself, e.g. using an internal BOOL tag that prevents the script from executing.)
There is a lot of info on SIOS about these topics, as Application Examples, FAQs and forum entries.
The PLC log method works, but data backup and especially display can become a pain.

Is there a way to know how much of the EEPROM memory is used?

I have looked through the "logbook" and "datalogger" APIs and there is no way of telling that the data logger is almost full. I found the API call with the path "/Mem/Logbook/IsFull". If I have understood it correctly, this will notify me when the log is full and the datalogger has stopped logging.
So my question is: is there a way to know how much of the memory is currently in use, so that I can clean up old data (I need to do some calculations on it before it is deleted) before the EEPROM is full and the DataLogger stops recording?
The data memory of Logbook/DataLogger is conceptually a ring buffer. That's why /Mem/DataLogger/IsFull always returns false on the Movesense sensor (Suunto uses the same API in its watches, where the situation is different). Therefore the sensor never stops recording; it just replaces the oldest data with new.
Here are a couple of strategies that you could use:
Plan A:
Create a new log (POST /Mem/Logbook/Entries => returns the logId for it)
Start Logging (PUT /Mem/DataLogger/State: LOGGING)
Every once in a while create a new log (POST /Mem/Logbook/Entries). Note: This can be done while logging is ongoing!
When you want to know the status of the log, read /Mem/Logbook/Entries. When the oldest entry has been completely overwritten, it disappears from the list. Note: GET /Entries is a heavy operation, so you may not want to do it while the logger is running!
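A rough sketch of Plan A in Python-style pseudocode; send_request() is a hypothetical stand-in for however your app issues whiteboard requests to the sensor (e.g. over BLE), and it is assumed to return the parsed response:

    def send_request(verb, path, value=None):
        # Hypothetical transport helper: forward a whiteboard request to the sensor.
        raise NotImplementedError   # depends on your BLE / whiteboard stack

    def rotate_log():
        # Plan A step 3: create a fresh log while logging keeps running.
        return send_request("POST", "/Mem/Logbook/Entries")    # returns the new logId

    def overwritten_ids(known_log_ids):
        # Heavy call - avoid doing this while the logger is running.
        entries = send_request("GET", "/Mem/Logbook/Entries")  # assumed: list of entry dicts
        still_there = {e["Id"] for e in entries}
        # Any logId we created that no longer appears has been eaten by the ring buffer.
        return [i for i in known_log_ids if i not in still_there]

    log_ids = [send_request("POST", "/Mem/Logbook/Entries")]   # create the first log
    send_request("PUT", "/Mem/DataLogger/State", "LOGGING")    # on the wire this is the DataLoggerState enum value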
Plan B
Every now and then start a new log and process the previous one. That way the log never overwrites something you have not processed.
Plan C
(Note: This is low level and may break with some future Movesense sensor release)
GET the first 256 bytes of EEPROM chip #0 using the /Component/EEPROM API. This area contains a number of ExtflashChunkStorage::StorageHeader structs (see: ExtflashChunkStorage.h); the rest is filled with 0xFF. The last StorageHeader before the 0xFF is the current one. With that StorageHeader you can see where the ring buffer starts (firstChunk) and where the next data is written (cursor). The difference of the two is the used memory. (Note: Since it is a ring buffer the difference can be negative; in that case add "size of the Logbook area - 256" to it.)
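The arithmetic from Plan C as a small sketch (Python; the header parsing itself is omitted - firstChunk, cursor and the size of the Logbook area are values you would read via /Component/EEPROM and ExtflashChunkStorage.h, and the numbers below are made up):

    # Used bytes in the Logbook ring buffer, computed from the fields of the
    # current (last non-0xFF) StorageHeader.
    def used_bytes(first_chunk, cursor, logbook_area_size, header_area=256):
        used = cursor - first_chunk
        if used < 0:                       # the ring buffer has wrapped around
            used += logbook_area_size - header_area
        return used

    # Example with placeholder numbers: wrapped buffer, 384 KiB Logbook area
    print(used_bytes(first_chunk=300_000, cursor=20_000, logbook_area_size=384 * 1024))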
Full disclosure: I work for Movesense team

Simultaneously incrementing the program counter and loading the Instruction register

In my Computer Architecture lectures, I was told that the IR assignment and the PC increment are done in parallel. However, surely this has an effect on which instruction is loaded.
If PC = 0 and the IR is loaded and then the PC is incremented, the IR will hold the instruction that was at address 0.
However, if PC = 0 and the PC is incremented and then the IR is loaded, the IR will hold the instruction that was at address 1.
So surely they can't be done simultaneously and the order must be defined?
You're not taking into account the wonders of flip-flops. The exact implementation depends, of course, on your specific design, but it's perfectly possible to read the value currently latched in some register or latch while at the same time preparing a different value to be stored there, as long as you know these values are independent (there's also the possibility of doing a "bypass" in more sophisticated designs, but that's beside the point here).
In this case, you'd be reading the current value of the PC (and using it to fetch the code from memory, or cache, or whatever), while preparing the next value (e.g. PC+4, or some branch target if you know it). This is how pipelines work.
Generally speaking, you either have enough time to do some work within the same cycle (incrementing the PC and using it for the code fetch), in which case both fit in the same pipestage, or, if you can't make it in time, you break these serial activities into two pipestages so they can be done in "parallel", because one of them belongs to the next operation flowing through the pipe, so there's no longer a dependency (aside from corner cases like branches or bubbles).
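A toy illustration of the flip-flop behaviour (a minimal Python sketch, not tied to any particular CPU design): within a cycle the register's output still shows the old value while the new value is only staged, and the new value only becomes visible at the clock edge.

    class Register:
        # Edge-triggered register: q is what the surrounding logic sees,
        # d is the value staged to be latched at the next clock edge.
        def __init__(self, value=0):
            self.q = value
            self.d = value

        def clock_edge(self):
            self.q = self.d

    memory = ["instr@0", "instr@1", "instr@2", "instr@3"]
    pc, ir = Register(0), Register(None)

    # Within one cycle, both steps use the OLD output pc.q "in parallel":
    ir.d = memory[pc.q]   # fetch uses the current PC output (address 0)
    pc.d = pc.q + 1       # the next PC is prepared from the same old value

    pc.clock_edge(); ir.clock_edge()
    print(ir.q, pc.q)     # -> instr@0 1  (IR holds address 0's instruction, PC is now 1)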

SNMP: How to find a MAC address in the network?

I've written a Perl script to query devices (switches) on the network; it's used to find a MAC address on the LAN. But I would like to improve it. Right now, I have to give my script these parameters:
The MAC address searched for
The switch's IP
The community
What can I do so that I only have to give the IP and the community?
I know that it depends on my network topology.
There is a main stack of 3 switches (Cisco 3750), and other ones (2960) are linked to it in cascade.
Does anyone have an idea?
Edit: I would like to not specify the switch,
just give the MAC address and the community.
You have to solve two problems: where will the script send the first query? Then, suppose you discover that a MAC address was learned through port 1/2/1 on that switch and that port is connected to another switch. Somehow your script must be smart enough to query the switch attached to port 1/2/1. Continue the same algorithm until you no longer have a switch to query.
What you are asking for is possible, but it would require you to either give the script network topology information in advance, or to discover it dynamically with CDP or LLDP. CDP always carries the neighbor's IP address; sometimes you can get that from LLDP. Both CDP and LLDP have MIB objects you can query.
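That chase-the-port algorithm, as a rough sketch (Python rather than Perl; find_port_for_mac() and cdp_neighbor_on_port() are hypothetical helpers you would implement with SNMP queries against the bridge and CDP/LLDP MIBs):

    def find_port_for_mac(switch_ip, community, mac):
        # Hypothetical: return the port this switch learned the MAC on, or None.
        raise NotImplementedError

    def cdp_neighbor_on_port(switch_ip, community, port):
        # Hypothetical: return the IP of a neighboring switch on that port, or None.
        raise NotImplementedError

    def locate_mac(start_switch, community, mac):
        switch = start_switch
        while True:
            port = find_port_for_mac(switch, community, mac)
            if port is None:
                return None                 # this switch never learned the MAC
            neighbor = cdp_neighbor_on_port(switch, community, port)
            if neighbor is None:
                return switch, port         # edge port: the host hangs off here
            switch = neighbor               # follow the uplink and repeat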
You'll need two scripts, basically. You already have a script to gather your data, but it takes too long to find a single MAC. Presumably you have a complete list of every switch and its IP address. Loop over them all, building a database of the CAM tables. Then when you need to search for a MAC, just query your pre-built database. Update it about once an hour or so and you should maintain pretty accurate results. You can speed up the querying of several devices by running multiple SNMP walks in parallel.
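A sketch of that pre-built database idea (again Python with a hypothetical snmp_walk() helper standing in for whatever SNMP library you use; the BRIDGE-MIB dot1dTpFdbPort table is one common place to read learned MAC addresses, though on Cisco switches you typically have to walk it once per VLAN using the community@vlan form):

    switches = ["10.0.0.1", "10.0.0.2"]            # your complete list of switch IPs
    community = "public"

    DOT1D_TP_FDB_PORT = "1.3.6.1.2.1.17.4.3.1.2"   # BRIDGE-MIB: learned MAC -> bridge port

    def snmp_walk(host, community, oid):
        # Hypothetical helper: return {oid_suffix: value} for an SNMP walk.
        raise NotImplementedError                  # use Net::SNMP, net-snmp, pysnmp, ...

    def build_cam_db():
        # MAC -> (switch, bridge port). Rebuild e.g. hourly; the walks can run in parallel.
        db = {}
        for switch in switches:
            for suffix, port in snmp_walk(switch, community, DOT1D_TP_FDB_PORT).items():
                # The OID suffix encodes the MAC address as six decimal octets.
                mac = ":".join(f"{int(o):02x}" for o in suffix.split("."))
                db[mac] = (switch, port)
        return db

    # cam = build_cam_db(); print(cam.get("00:11:22:33:44:55"))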