OPC Publisher doesn't send data in the order generated by the OPC simulation server - opc-ua

I have been trying to retrieve sensor data generated by an OPC simulation server (data listed in an Excel file and read by the OPC simulation) into one of the custom modules in Azure IoT Edge. When the data is logged to the console, it shows that the data has not arrived in order. The following is the JSON for the OPC Publisher hosted in IoT Edge as a module.
"OPCPublisher": {
    "version": "1.0",
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
        "image": "mcr.microsoft.com/iotedge/opc-publisher:2.8",
        "createOptions": {
            "Hostname": "publisher",
            "Cmd": [
                "publisher",
                "--pf=/appdata/publishednodes.json",
                "--lf=/appdata/publisher.log",
                "--aa"
            ],
            "HostConfig": {
                "Binds": [
                    "/home/sineth/iiotedge:/appdata"
                ]
            }
        }
    }
}
The following is the published nodes JSON on the gateway device.
The following is a screenshot of my Excel sheet data.
But the OPC Publisher does not route the data to the modules in order: it starts from an arbitrary row and does not continue sequentially.
For example, it sends the row with value 11 for Tag11 and then next sends the row with value 17 for Tag11. And sometimes it sends a batch of data. There is no proper order.
This is not an issue with the OPC server simulation, since I have tested the simulation server with a standalone OPC client and it receives the data in order. The Excel file is read by the simulation server.
The following image is a screenshot of my IoT Edge module (Python), where I log the data retrieved from the OPC Publisher route to the console.
Appreciate any help on this.
Thanks a lot.

Adding a summary from the GitHub Issues discussion here:
OPC Publisher generates a unique message id for each OPC UA endpoint (auto-incrementing by one).
The python client code above logs the same message more than 3500 times.
Receiving a message does not seem to block, and the code therefore handles the same message over and over again.
receive_on_message_input is deprecated and should not be used anymore; see the API documentation.
Without the duplicates, all value changes are in order, but the behavior is still not what the OP needs.
More than one message (containing value changes for all three tags) is batched.
OPC Publisher tries to optimize for cost and performance; sending each message on its own is neither, but it is possible to configure OPC Publisher to send data directly by setting the batch size to one.
command line argument --bs=1
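Applied to the deployment shown earlier, the flag slots into the module's Cmd array (a sketch only; batch size one trades message cost and throughput for immediacy):

```json
"Cmd": [
    "publisher",
    "--pf=/appdata/publishednodes.json",
    "--lf=/appdata/publisher.log",
    "--aa",
    "--bs=1"
]
```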
Not starting with the first value
OPC Publisher establishes a connection to the OPC UA server and creates monitored items for every OPC UA node in its config file. By default, an OPC UA monitored item sends an initial notification with the current value. If you want to ignore it, you can use skip-first.
command line argument --sk=true
But in the case described above the first value is also relevant. If the first message (message id = 1) does not contain the first value, then the OPC server simulation changed it beforehand.
Please be aware that OPC Publisher can only publish once the OPC UA client/server connection is fully established (including trusting of certificates) and the subscriptions and monitored items are created. This time also depends on the performance of the OPC UA server and the network.
Proposals:
Change the OPC simulation to only start the simulation sequence once a client connection is fully established.
Receiving the same message multiple times
If the messages are received multiple times, it could be an error in the routing of messages from one IoT Edge module to another. Please make sure to explicitly name the sending module (in this case the OPC Publisher):
"$edgeHub": {
    "properties.desired": {
        "schemaVersion": "1.2",
        "routes": {
            "opcPublisherToPyDataConsumer": "FROM /messages/modules/opc-publisher INTO BrokeredEndpoint(\"/modules/PyDataConsumer/inputs/fromOPC\")"
        }
    }
}
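On the consuming side, the duplicate symptom described above can also be guarded against by filtering on the auto-incrementing message id. A minimal sketch in plain Python, assuming each decoded payload carries a `MessageId` field as described in the summary (the field name and payload shape are assumptions, not the exact OPC Publisher schema):

```python
def filter_new_messages(messages, last_seen=None):
    """Drop repeated and replayed payloads, keeping first-seen order.

    `messages` is an iterable of decoded payload dicts; `last_seen` is the
    highest MessageId already processed (None on first call).
    """
    result = []
    for msg in messages:
        mid = msg["MessageId"]
        # A repeat or replay has an id we have already passed; skip it.
        if last_seen is None or mid > last_seen:
            result.append(msg)
            last_seen = mid
    return result
```

In a real module the `last_seen` watermark would be kept per OPC UA endpoint, since each endpoint has its own id sequence.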

So some questions:
What version of IoT Edge are you using?
Is it just that the logs are not in order, or are messages being received out of order?
What protocol are you using, MQTT or AMQP?

Related

Agent can't be in several flowcharts at the time. At least two flowchart blocks are in conflict:

Suppose I have the following supply chain model (see model1).
Agents communicate with each other through a defined network and send messages to each other through ports. For example, demand is generated for customers through their ports and sent as "orders" upstream to facilities. Upstream facilities send "shipments" to downstream facilities, and stats are collected at each node.
The model seems to work for 2 echelons, but when one facility is connected to two downstream facilities as desired, I get the following error: "Agent can't be in several flowcharts at the time. At least two flowchart blocks are in conflict" (see error). Based on the description, it seems the agent "shipment" is sent to two facilities at the same time.
My question is: how can I avoid this conflict?
More information about each node:
Agent "orders" enter through each node's port and are captured with Enter.take(msg), follow a flowchart, and exit as agent "shipments" to each destination. Each agent "order" has a double amount and a port destination (see facility node).
Any suggestions, please?
You must make sure that you do not send an agent into a flowchart when it is already in another flowchart, correct. This is bad model design.
One way to debug and find the root issue: before sending any message agent, check currentBlock() != null and traceln the agent and the block. Also pause the model.
You can then see where you are trying to (re)send an agent that is already in some other flowchart block.
You are probably sending out message agents that are still somewhere else.
PS: For messages, you probably do not want to use flowcharts at all, but normal message passing. This avoids these pains, as you can easily send the same message to several agents. Check how message passing is done in the example agent models.
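The debug check described above, as AnyLogic (Java) action code run just before sending a message agent. Here `msg` is the message agent about to be sent and `enter` is the node's Enter block; both names are placeholders, and the exact pause call may differ between AnyLogic versions:

```java
// Debugging sketch: verify the agent has left every other flowchart
// before injecting it into this node's flowchart.
if ( msg.currentBlock() != null ) {
    traceln( "Conflict: " + msg + " is still in block " + msg.currentBlock() );
    getEngine().pause();   // pause the model so the conflicting block can be inspected
} else {
    enter.take( msg );     // safe: the agent is in no other flowchart block
}
```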

OPC UA Client MonitoredDataItem - What is MonitoredDataItem MonitoringMode.Reporting vs MonitoringMode.Sampling?

The sampling rate is bound by Subscription's PublishingInterval.
What is the difference between the two MonitoringModes? I can't find any information anywhere that describes them.
" ... I can't find any information anywhere that describes them."
OPC UA Spec Part 4 describes them in detail, but the TL;DR version is: Reporting means the client is actually sent data change notifications for these items; Sampling means the item is only sampled and value changes are put into the queue each monitored item has, but changes are not actually reported to the client.
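The distinction can be modeled with a toy monitored item. This is a conceptual sketch in plain Python of the Part 4 semantics described above, not a real OPC UA API:

```python
from collections import deque

REPORTING, SAMPLING, DISABLED = "Reporting", "Sampling", "Disabled"

class ToyMonitoredItem:
    """Conceptual model of OPC UA monitoring modes (not a real API)."""

    def __init__(self, mode, queue_size=10):
        self.mode = mode
        self.queue = deque(maxlen=queue_size)  # each monitored item has a queue

    def sample(self, value):
        # Reporting and Sampling items both sample and queue value changes;
        # a Disabled item does not sample at all.
        if self.mode in (REPORTING, SAMPLING):
            self.queue.append(value)

    def publish(self):
        # Only a Reporting item delivers its queued notifications
        # to the client on a publish cycle.
        if self.mode == REPORTING:
            out = list(self.queue)
            self.queue.clear()
            return out
        return []
```

A Sampling item therefore keeps filling its queue silently; switching it to Reporting later is what makes the queued changes visible to the client.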

Transferring .csv files through XBee Modules

We have set up a Monitoring System that can collect data. The system consists of several RPi's with attached accelerometers that log the data to a .csv file.
The RPi's are so spread out that they are not in reach of each other or of their own created PiFY.
We use XBee S1 with DigiMesh 2.4 for increased range to send the RPi's commands through XCTU. The XBee modules are set up as routers. We can start and stop data collection.
Now we are interested in transferring the collected data (the .csv file) to a master RPi. How can it be done through these XBee modules?
I'd recommend doing any coding in Python, and using the pyserial module to send/receive data on the serial port. It's fairly simple to get started with that.
Configure the routers in "AT mode" (also called "transparent serial mode") via ATAP=0, with DL and DH set to 0 (telling them to use the coordinator as the destination for all serial data).
Simple Coordinator Solution
Have the routers include some sort of node ID in each CSV record, and configure the coordinator in "AT mode" as well. That way it will receive CSV records from multiple sources and simply dump them out of its serial port. As long as you send complete lines of data from each router, you shouldn't see corrupted CSV records on the coordinator.
More Complicated Coordinator Solution
Configure the coordinator in "API mode" via ATAP=1. Pick a programming language you're comfortable with, like C, Java or Python, and grab one of Digi's open source "host libraries" from their GitHub repository.
The coordinator will receive CSV data inside of API frames so it can identify the source device that sent the data. With this configuration, you can easily send data back to a specific device or make use of remote AT commands to change I/O on the routers.
Note that with either setup, there's no need for the RPi to create the file -- it can just send a CSV line whenever it has data ready. Just make sure you stage a complete line and send it in a single "serial write" call to ensure that it isn't split into multiple packets over the air.
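The "complete line per write" advice can be sketched in plain Python. The node-ID field and the pyserial usage are illustrative assumptions, not a fixed protocol:

```python
def make_record(node_id, values):
    """Build one complete CSV line, tagged with the sender's node ID,
    as a single bytes object so it can go out in one serial write."""
    fields = [str(node_id)] + [str(v) for v in values]
    return (",".join(fields) + "\n").encode("ascii")

# With pyserial (not exercised here), the whole record goes out in one
# write() call so it is not split across radio packets:
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 9600)
#   port.write(make_record("rpi-3", [0.12, 0.08, 9.81]))
```

On the coordinator side, the leading field identifies which RPi produced each line, so records from multiple routers can share one serial stream.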

Can ZMQ publish message to specific client by pub-sub socket?

I am using a PUB/SUB socket; currently the server subscribes with byte[0] (all topics)
while the client subscribes with a byte[16] - a specific header as the topic.
However, I cannot stop the client from subscribing with byte[0], which receives all the other messages.
My application is like an app game which has one single server using ZMQ for the connection,
and many clients that have ZMQ sockets to talk with the server.
What pattern or socket should I use in this case?
Thanks
" ... cannot stop client to subscribe byte[0] which can receive all other messages."
Stopping a "subscribe to all" mode of the SUB client
For the ZeroMQ PUB/SUB Formal Communication Pattern archetype, the SUB client has to submit its subscription request (via zmq_setsockopt()).
The PUB side (a Game Server) has no option to do that from its side.
There is a no-subscription state right upon creation of a new SUB socket, thus an absolutely restrictive filter: no message passes through. (For further details on the methods for SUBSCRIBE / UNSUBSCRIBE, see below.)
The ZeroMQ specification details the setting for this:
int zmq_setsockopt ( void *socket,
int option_name,
const void *option_value,
size_t option_len
);
Caution: only ZMQ_SUBSCRIBE, ZMQ_UNSUBSCRIBE and ZMQ_LINGER take effect immediately; other options take effect only for subsequent socket bind/connects.
ZMQ_SUBSCRIBE: Establish message filter
The ZMQ_SUBSCRIBE option shall establish a new message filter on a ZMQ_SUB socket. Newly created ZMQ_SUB sockets shall filter out all incoming messages, therefore you should call this option to establish an initial message filter.
An empty option_value of length zero shall subscribe to all incoming messages.
A non-empty option_value shall subscribe to all messages beginning with the specified prefix.
Multiple filters may be attached to a single ZMQ_SUB socket, in which case a message shall be accepted if it matches at least one filter.
ZMQ_UNSUBSCRIBE: Remove message filter
The ZMQ_UNSUBSCRIBE option shall remove an existing message filter on a ZMQ_SUB socket. The filter specified must match an existing filter previously established with the ZMQ_SUBSCRIBE option. If the socket has several instances of the same filter attached the ZMQ_UNSUBSCRIBE option shall remove only one instance, leaving the rest in place and functional.
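In code, switching a SUB client from "receive everything" to a single 16-byte topic boils down to two zmq_setsockopt() calls, per the option descriptions above (a fragment, assuming `sub` is a connected ZMQ_SUB socket and `topic` points to the 16-byte header):

```c
/* remove the catch-all filter, if an empty subscription was set earlier */
zmq_setsockopt (sub, ZMQ_UNSUBSCRIBE, "", 0);
/* accept only messages that begin with the 16-byte topic prefix */
zmq_setsockopt (sub, ZMQ_SUBSCRIBE, topic, 16);
```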
How to enforce an ad-hoc, server-dictated, ZMQ_SUBSCRIBE restrictions?
This is possible by extending the messaging layer and adding a control-mode socket that carries server-initiated settings for the client's ZMQ_SUB message filtering.
Upon receiving a new server-dictated ZMQ_SUBSCRIBE/ZMQ_UNSUBSCRIBE setting, the ZMQ_SUB client-side code simply handles that request and calls zmq_setsockopt() accordingly.
FSA-driven grammars for this approach are rich in further possibilities, and will allow any Game Server / Game Community to smoothly go this way.
What pattern or socket should I use?
ZeroMQ is rather a library of LEGO-style elements to be assembled into a bigger picture.
Expecting such a smart library to have a one-size-fits-all ninja element is, on closer look, an oxymoron.
So, to avoid a "never-ending story" of adding "although this ... and also that ...":
1. Review all requirements and list the features for the end-to-end scalable solution.
2. Design a messaging concept and validate that it meets all the listed requirements and covers all the features in [1].
3. Implement [2].
4. Test [3] and correct it to meet the end-to-end specification [1] 1:1.
5. Enjoy it. You have done it end-to-end right.

Omron PLC Ethernet card

I have an Ethernet card in an Omron PLC. Is there any way to do an automatic check to see whether the Ethernet card is working? If not, is there a manual way? For example, if the card were to fail on the PLC, it would give an error. But if the card just loses its connection to the server, it would NOT give an error. Any help on how to do this?
There are several types of errors you can check for. The way you do this depends on the type of error. Things you can check :
ETN unit Error Status (found at PLC CIO address CIO 1500 + (25 x unit number) +18)
What it reports : IP configuration, routing, DNS, mail, network services, etc, errors.
See : Manual Section 8-2
The ETN unit also keeps an internal error log (manual section 8-3) that you can read out to your HMI software (if you use it) using FINS commands. This documents all manner of errors internal to the ETN unit.
There are also other memory reservations in the PLC for CPU bus devices (like the ETN unit) which provide basic status flags you can include in ladder logic to raise alarms, etc. (See section 4-3 : Auxiliary Area Data).
These flags indicate whether the unit is initializing, for example, has initialized successfully, is ready to execute network commands, whether the last executed command completed OK or returned an error code (which can be read from the Error Log as above), etc. These can indicate whether the PLC is unable to properly communicate with the ETN device.
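The CIO address formula above is easy to get wrong by a word, so a throwaway helper makes the arithmetic explicit (plain Python; the function name is mine, the formula is the one quoted from the manual):

```python
def etn_error_status_address(unit_number):
    """CIO word holding the ETN unit's Error Status:
    CIO 1500 + (25 x unit number) + 18."""
    return 1500 + 25 * unit_number + 18
```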
You can implement a single byte location that the server auto-increments each second. Then, every few seconds, check in your PLC logic whether the old reading is the same as the new reading; if it is, trigger an alarm that the physical server (which is the communication client) is not communicating with the PLC Ethernet card.
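That heartbeat scheme can be sketched in plain Python as a conceptual model of the ladder logic (names are mine; the real check would live in the PLC program and read the byte from the reserved memory location):

```python
class HeartbeatWatchdog:
    """Alarm if the server-incremented byte stops changing between checks."""

    def __init__(self):
        self.last_reading = None

    def check(self, reading):
        # Called every few seconds with the current value of the byte
        # the server increments once per second. Returns True when the
        # value has not moved since the previous check, i.e. the server
        # has stopped writing to the PLC Ethernet card.
        stale = self.last_reading is not None and reading == self.last_reading
        self.last_reading = reading
        return stale
```

The check interval just needs to be comfortably longer than the server's one-second increment, so a healthy link always shows a changed value.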