Write an array (of REAL) to a Siemens S7-1500 with OPC UA in WINDEV - opc-ua

(sorry for my French way of writing English)
Hi,
After a lot of research on the net and in the docs, I can't solve my problem.
I'm trying to write an array of 150 REAL values to a Siemens PLC with OPC UA:
PROCÉDURE WriteArrayToNode(NodeIdTxt is string)
	Trace("lets write")
	myNodeId is NodeId(gsNotreBD + "." + NodeIdTxt)
	nodesToWrite is WriteValueCollection <- new WriteValueCollection()
	nodeToWrite is WriteValue <- new WriteValue()
	nodeToWrite.NodeId = myNodeId
	nodeToWrite.AttributeId = Attributes.Value
	Trace(Attributes.ValueRank, Attributes.Value, Attributes.ArrayDimensions)
	vTemp is Opc.Ua.Variant(gTab_W_PARA)  // gTab_W_PARA is the array of 150 REAL I am trying to write
	clDataValue1 is DataValue <- new DataValue(vTemp)
	nodeToWrite.Value = clDataValue1
	nodeToWrite.IndexRange = "1:150"
	nodesToWrite.Add(nodeToWrite)
	clRequestHeader is RequestHeader <- new RequestHeader()
	myResults is StatusCodeCollection dynamic = new StatusCodeCollection
	myDiagsInfo is DiagnosticInfoCollection dynamic = new DiagnosticInfoCollection
	request is RequestHeader dynamic = new RequestHeader
	gmySession.Write(request, nodesToWrite, myResults, myDiagsInfo)
	Trace(myDiagsInfo.get_Count())
	Trace("scribe ? ")
	RETURN True
	Trace(" yes scribe ")
CASE ERROR:
	Trace("erreur", ExceptionInfo)
	RETURN False
CASE EXCEPTION:
	Trace("exception", ExceptionInfo)
	RETURN False
The TRACE gives:
lets write // start of the process
15 13 16 // Attributes.ValueRank, Attributes.Value, Attributes.ArrayDimensions, which I don't understand:
// from the doc I would read this as "Attributes is a 15-dimensional array with a value of 13 and 16
// indexes", but what is Attributes? I have never used an array like that
0
scribe ? // it returns True, so there is no error?
But in Siemens my array doesn't change, so the write didn't work?
I tried using Variant and DataValue as in many examples, but I'm not sure it's the right way.
If someone can help, I thank them in advance.

Your use of IndexRange is probably wrong.
Assuming that you use a one-dimensional S7 array REAL[1..150] (which the data variable indicates with its ValueRank and ArrayDimensions attributes):
(By the way, Attributes is just a class of integer constants identifying node attributes: ValueRank = 15, Value = 13, ArrayDimensions = 16. Your Trace printed those constants, not your data.)
Do not use S7 range syntax for the OPC UA IndexRange!
From the UA spec:
"All indexes start with 0. The maximum value for any index is one less than the length of the dimension."
So "1:150" addresses a range that doesn't exist on the server. "0:149" would be correct.
If you are addressing the whole array, don't use an IndexRange at all. In fact, supplying one causes needless work on some servers.
Maybe the S7-1500 OPC UA server does not support IndexRange for writes at all, but then it should return an error.
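For comparison, here is how a whole-array write looks from Python with the FreeOpcUa python-opcua package. This is a minimal sketch with a made-up endpoint and node id; the point is that the whole array goes out as a single Variant value, with no IndexRange at all:

    from opcua import Client, ua

    client = Client("opc.tcp://192.168.0.1:4840")  # hypothetical endpoint of the S7-1500
    client.connect()
    try:
        # hypothetical node id of the DB array; adjust namespace and names to your project
        node = client.get_node('ns=3;s="MyDB"."W_PARA"')
        values = [0.0] * 150
        # one write of the whole array: a single Variant of 150 floats, no IndexRange
        node.set_value(ua.Variant(values, ua.VariantType.Float))
    finally:
        client.disconnect()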

Related

Pymodbus: write_coil also writes in discrete input register?

I have a question regarding the pymodbus module and its functionality. My server works just fine, as well as the implemented datastore, I guess.
Here is the code for my server and datastore:
import asyncio
from pymodbus.datastore import (
    ModbusSequentialDataBlock,
    ModbusServerContext,
    ModbusSlaveContext,
)
from pymodbus.server import StartTcpServer
from pymodbus.device import ModbusDeviceIdentification
from pymodbus.version import version

def run_full_sync_server():
    print("Creating sequential Datastore.")
    # Creating continuing Datablock without gaps.
    datablock = ModbusSequentialDataBlock(0, [0] * 20)  # Initializing starting address and registers with value 0 each
    print("Creating slave Context.")
    # Creating one slave context. Allows the server to respond to slave context with its unit id.
    slave_context = ModbusSlaveContext(
        di=datablock,  # 'di' - Discrete Input Initializer
        co=datablock,  # 'co' - Coils Initializer
        hr=datablock,  # 'hr' - Holding Register Initializer
        ir=datablock,  # 'ir' - Input Registers Initializer
        unit=1,        # SlaveID
        zero_mode=False,
    )
    print("Creating Server Identity.")
    # Creates Server Information.
    identity = ModbusDeviceIdentification(
        info_name={
            "VendorName": "XXX",
            "ProductCode": "ModbusKassow",
            "VendorUrl": "XXX",
            "ProductName": "Synchronous Py",
            "ModelName": "Pymodbus Server",
            "MajorMinorRevision": version.short(),
        }
    )
    print("Starting Server.")
    server_context = ModbusServerContext(slaves=slave_context, single=True)
    server = StartTcpServer(context=server_context, identity=identity, address=("0.0.0.0", 502))
    # return server

if __name__ == "__main__":
    asyncio.create_task(run_full_sync_server())
I am working with Spyder at the moment, therefore I need a separate asyncio instance to get the server running.
As for writing coils/registers, there are some functions provided by the module, but whenever I use the function
write_coil
the exact values appear in my discrete inputs datastore (run the script twice).
from pymodbus.client import ModbusTcpClient
client = ModbusTcpClient('127.0.0.1', port=502)
discrete_inputs = client.read_discrete_inputs(0, 14) # start reading from address 0
discrete_inputs.setBit(3, 1) # set address 3 to value 1
discrete_inputs.setBit(1,1)
print(discrete_inputs.getBit(0))
print(discrete_inputs.getBit(1))
print(discrete_inputs.bits[:])
print()
client.write_coil(10, True)
client.write_coil(12, True)
client.write_coil(15, True)
reading_coil = client.read_coils(0, 14) # start reading from address 0
print(reading_coil.bits[:])
I thought that the discrete inputs and discrete coils were separate datastores because they were initialized separately?
Also, why does
discrete_inputs.bits[:]
return a list of only 16 bool values when in the server script I initialized the datastore with a list of 20 values?
Whenever I run the script twice, my coil values appear in the discrete input datastore:
[False, True, False, True, False, False, False, False, False, False, True, False, True, False, False, False]
I know that each register/coil can only store 16 bits, which corresponds to the list of 16 values I get from the read function.
I tried the code above and was expecting the datastores to be separate.
Also, I was assuming that maybe I am only writing to one register, because I get exactly 16 bits returned.
If that's the case, however, I do not know how to access the other coil addresses.
I thought that the discrete inputs and discrete coils were separate datastores because they were initialized separately?
They are; but you are creating a single store and then assigning that same store to each element. Try something like (showing two ways of accomplishing this):
datablock = ModbusSequentialDataBlock(0, [0] * 20)
datablock2 = ModbusSequentialDataBlock(0, [0] * 20)
slave_context = ModbusSlaveContext(
    di=datablock,                               # 'di' - Discrete Input Initializer
    co=ModbusSequentialDataBlock(0, [0] * 20),  # 'co' - Coils Initializer
    hr=datablock2,                              # 'hr' - Holding Register Initializer
    ir=ModbusSequentialDataBlock(0, [0] * 20),  # 'ir' - Input Registers Initializer
    unit=1,                                     # SlaveID
    zero_mode=False,
)
I know that each register/coil can only store 16 bits, which corresponds to the list of 16 values I get from the read function.
I think you may be confusing the Modbus protocol and the PyModbus implementation. A Modbus coil is one bit (The PyModbus datastore may end up storing that one bit in a 16-bit variable but that is an implementation detail).
As per the docs:
The coils in the response message are packed as one coil per bit of the data field.
Status is indicated as 1= ON and 0= OFF. The LSB of the first data byte contains the output addressed in the query. The other coils follow toward the high order end of this byte, and from low order to high order in subsequent bytes.
If the returned output quantity is not a multiple of eight, the remaining bits in the final data byte will be padded with zeros (toward the high order end of the byte). The Byte Count field specifies the quantity of complete bytes of data.
You are requesting 14 discrete inputs (client.read_discrete_inputs(0, 14)), so the output is rounded up to 2 bytes (16 bits).
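The rounding is easy to reproduce yourself; a quick sketch of the byte-count arithmetic from the quoted docs:

    import math

    requested = 14                         # client.read_discrete_inputs(0, 14)
    byte_count = math.ceil(requested / 8)  # Modbus packs 8 coil/input bits per data byte
    print(byte_count)                      # 2 -> the response carries 2 data bytes
    print(byte_count * 8)                  # 16 -> hence 16 entries in .bits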

Transferring arrays/classes/records between locales

In a typical N-Body simulation, at the end of each epoch, each locale would need to share its own portion of the world (i.e. all bodies) to the rest of the locales. I am working on this with a local-view approach (i.e. using on Loc statements). I encountered some strange behaviours that I couldn't make sense out of, so I decided to make a test program, in which things got more complicated. Here's the code to replicate the experiment.
proc log(args...?n) {
  writeln("[locale = ", here.id, "] [", datetime.now(), "] => ", args);
}

const max: int = 50000;

record stuff {
  var x1: int;
  var x2: int;

  proc init() {
    this.x1 = here.id;
    this.x2 = here.id;
  }
}

class ctuff {
  var x1: int;
  var x2: int;

  proc init() {
    this.x1 = here.id;
    this.x2 = here.id;
  }
}

class wrapper {
  // The point is that total size (in bytes) of data in `r`, `c` and `a` are the same here,
  // because the record and the class hold two ints per index.
  var r: [{1..max / 2}] stuff;
  var c: [{1..max / 2}] owned ctuff?;
  var a: [{1..max}] int;

  proc init() {
    this.a = here.id;
  }
}

proc test() {
  var wrappers: [LocaleSpace] owned wrapper?;

  coforall loc in LocaleSpace {
    on Locales[loc] {
      wrappers[loc] = new owned wrapper();
    }
  }

  // rest of the experiment further down.
}
Two interesting behaviours happen here.
1. Moving data
Now, each instance of wrapper in the array wrappers should live in its locale. Specifically, the references (wrappers) will live on locale 0, but the internal data (r, c, a) should live on the respective locale. So we try to move some of it from locale 1 to locale 3, like so:
on Locales[3] {
  var timer: Timer;
  timer.start();
  var local_stuff = wrappers[1]!.r;
  timer.stop();
  log("get r from 1", timer.elapsed());
  log(local_stuff);
}

on Locales[3] {
  var timer: Timer;
  timer.start();
  var local_c = wrappers[1]!.c;
  timer.stop();
  log("get c from 1", timer.elapsed());
}

on Locales[3] {
  var timer: Timer;
  timer.start();
  var local_a = wrappers[1]!.a;
  timer.stop();
  log("get a from 1", timer.elapsed());
}
Surprisingly, my timings show that:
Regardless of the size (const max), the time for sending the array and the record stays constant, which doesn't make sense to me. I even checked with chplvis, and the size of the GET actually increases, but the time stays the same.
The time for sending the class field increases with the size, which makes sense, but it is quite slow and I don't know which case to trust here.
2. Querying the locales directly.
To demystify the problem, I also query the .locale.id of some variables directly. First, we query the data, which we expect to live in locale 2, from locale 2:
on Locales[2] {
  var wrappers_ref = wrappers[2]!; // This is always 1 GET from 0, okay.
  log("array",
      wrappers_ref.a.locale.id,
      wrappers_ref.a[1].locale.id
  );
  log("record",
      wrappers_ref.r.locale.id,
      wrappers_ref.r[1].locale.id,
      wrappers_ref.r[1].x1.locale.id
  );
  log("class",
      wrappers_ref.c.locale.id,
      wrappers_ref.c[1]!.locale.id,
      wrappers_ref.c[1]!.x1.locale.id
  );
}
And the result is:
[locale = 2] [2020-12-26T19:36:26.834472] => (array, 2, 2)
[locale = 2] [2020-12-26T19:36:26.894779] => (record, 2, 2, 2)
[locale = 2] [2020-12-26T19:36:27.023112] => (class, 2, 2, 2)
Which is expected. Yet, if we query the locale of the same data on locale 1, then we get:
[locale = 1] [2020-12-26T19:34:28.509624] => (array, 2, 2)
[locale = 1] [2020-12-26T19:34:28.574125] => (record, 2, 2, 1)
[locale = 1] [2020-12-26T19:34:28.700481] => (class, 2, 2, 2)
This implies that wrappers_ref.r[1].x1 lives on locale 1, even though it should clearly be on locale 2. My only guess is that by the time .locale.id is executed, the data (i.e. the .x1 of the record) has already been moved to the querying locale (1).
So all in all, the second part of the experiment led to a secondary question, while not answering the first part.
NOTE: all experiments are run with -nl 4 in the chapel/chapel-gasnet Docker image.
Good observations, let me see if I can shed some light.
As an initial note, any timings taken with the gasnet Docker image should be taken with a grain of salt since that image simulates the execution across multiple nodes using your local system rather than running each locale on its own compute node as intended in Chapel. As a result, it is useful for developing distributed memory programs, but the performance characteristics are likely to be very different than running on an actual cluster or supercomputer. That said, it can still be useful for getting coarse timings (e.g., your "this is taking a much longer time" observation) or for counting communications using chplvis or the CommDiagnostics module.
With respect to your observations about timings, I also observe that the array-of-class case is much slower, and I believe I can explain some of the behaviors:
First, it's important to understand that any cross-node communications can be characterized using a formula like alpha + beta*length. Think of alpha as representing the basic cost of performing the communication, independent of length. This represents the cost of calling down through the software stack to get to the network, putting the data on the wire, receiving it on the other side, and getting it back up through the software stack to the application there. The precise value of alpha will depend on factors like the type of communication, choice of software stack, and physical hardware. Meanwhile, think of beta as representing the per-byte cost of the communication where, as you intuit, longer messages necessarily cost more because there's more data to put on the wire, or potentially to buffer or copy, depending on how the communication is implemented.
In my experience, the value of alpha typically dominates beta for most system configurations. That's not to say that it's free to do longer data transfers, but that the variance in execution time tends to be much smaller for longer vs. shorter transfers than it is for performing a single transfer versus many. As a result, when choosing between performing one transfer of n elements vs. n transfers of 1 element, you'll almost always want the former.
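To make that concrete, here is a tiny sketch of the model with illustrative (made-up) constants; the exact values don't matter, only the ratio between the two strategies:

    alpha = 1e-6  # per-message overhead in seconds (illustrative value only)
    beta = 1e-9   # per-byte cost in seconds (illustrative value only)

    def transfer_time(messages: int, bytes_per_message: int) -> float:
        return messages * (alpha + beta * bytes_per_message)

    n = 50_000                        # elements of 8 bytes each
    print(transfer_time(1, 8 * n))    # one bulk transfer: ~0.0004 s
    print(transfer_time(n, 8))        # n tiny transfers:  ~0.05 s, dominated by n * alpha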
To investigate your timings, I bracketed your timed code portions with calls to the CommDiagnostics module as follows:
resetCommDiagnostics();
startCommDiagnostics();
...code to time here...
stopCommDiagnostics();
printCommDiagnosticsTable();
and found, as you did with chplvis, that the number of communications required to localize the array of records or array of ints was constant as I varied max, for example:
locale  get  execute_on
0       0    0
1       0    0
2       0    0
3       21   1
This is consistent with what I'd expect from the implementation: for an array of value types, we perform a fixed number of communications to access array meta-data, and then communicate the array elements themselves in a single data transfer to amortize the overheads (avoid paying multiple alpha costs).
In contrast, I found that the number of communications for localizing the array of classes was proportional to the size of the array. For example, for the default value of 50,000 for max, I saw:
locale  get    put    execute_on
0       0      0      0
1       0      0      0
2       0      0      0
3       25040  25000  1
I believe the reason for this distinction relates to the fact that c is an array of owned classes, in which only a single class variable can "own" a given ctuff object at a time. As a result, when copying the elements of array c from one locale to another, you're not just copying raw data, as with the record and integer cases, but also performing an ownership transfer per element. This essentially requires setting the remote value to nil after copying its value to the local class variable. In our current implementation, this seems to be done using a remote get to copy the remote class value to the local one, followed by a remote put to set the remote value to nil, hence, we have a get and put per array element, resulting in O(n) communications rather than O(1) as in the previous cases. With additional effort, we could potentially have the compiler optimize this case, though I believe it will always be more expensive than the others due to the need to perform the ownership transfer.
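As a toy model of why the ownership transfer doubles the traffic (this is not actual Chapel internals, just the shape of the communication pattern described above):

    def move_owned_element(remote, i, local, j, comms):
        # copying an 'owned' element is a move, not a plain copy:
        local[j] = remote[i]    # one GET to fetch the remote class reference
        remote[i] = None        # one PUT to nil the old owner, completing the transfer
        comms["get"] += 1
        comms["put"] += 1

    comms = {"get": 0, "put": 0}
    remote, local = [object() for _ in range(25_000)], [None] * 25_000
    for i in range(25_000):
        move_owned_element(remote, i, local, i, comms)
    print(comms)  # {'get': 25000, 'put': 25000} -> O(n) communications, matching the table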
I tested the hypothesis that owned classes were resulting in the additional overhead by changing your ctuff objects from being owned to unmanaged, which removes any ownership semantics from the implementation. When I do this, I see a constant number of communications, as in the value cases:
locale  get  execute_on
0       0    0
1       0    0
2       0    0
3       21   1
I believe this represents the fact that once the language has no need to manage the ownership of the class variables, it can simply transfer their pointer values in a single transfer again.
Beyond these performance notes, it's important to understand a key semantic difference between classes and records when choosing which to use. A class object is allocated on the heap, and a class variable is essentially a reference or pointer to that object. Thus, when a class variable is copied from one locale to another, only the pointer is copied, and the original object remains where it was (for better or worse). In contrast, a record variable represents the object itself, and can be thought of as being allocated "in place" (e.g., on the stack for a local variable). When a record variable is copied from one locale to the other, it's the object itself (i.e., the record's fields' values) which are copied, resulting in a new copy of the object itself. See this SO question for further details.
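A rough analogy in Python, where class instances behave like Chapel class variables (references) and a deep copy behaves like a Chapel record copy (the object itself):

    import copy

    class CTuff:           # instances are references, like Chapel class variables
        def __init__(self):
            self.x1 = 0

    a = [CTuff() for _ in range(3)]
    ref_copy = list(a)             # copies only the references: same underlying objects
    val_copy = copy.deepcopy(a)    # copies the objects themselves, like record semantics

    a[0].x1 = 42
    print(ref_copy[0].x1)  # 42 -> the "copy" still points at the original object
    print(val_copy[0].x1)  # 0  -> the deep copy made independent objects, like records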
Moving on to your second observation, I believe that your interpretation is correct, and that this may be a bug in the implementation (I need to stew on it a bit more to be confident). Specifically, I think you're correct that what's happening is that wrappers_ref.r[1].x1 is being evaluated, with the result being stored in a local variable, and that the .locale.id query is being applied to the local variable storing the result rather than the original field. I tested this theory by taking a ref to the field and then printing locale.id of that ref, as follows:
ref x1loc = wrappers_ref.r[1].x1;
...x1loc.locale.id...
and that seemed to give the right result. I also looked at the generated code which seemed to indicate that our theories were correct. I don't believe that the implementation should behave this way, but need to think about it a bit more before being confident. If you'd like to open a bug against this on Chapel's GitHub issues page, for further discussion there, we'd appreciate that.

Data ranged subscript strange behavior

I was playing with Swift's Data in the following small piece of code:
var d = Data(count: 10)
d[5] = 3
let d2 = d[5..<8]
print("\(d2[0])")
To my surprise, this code throws an exception on print() while the following code does not:
var d = Data(count: 10)
d[5] = 3
let d2 = d.subdata(in: 5..<8)
print("\(d2[0])")
I somehow understand why this happens, but I don't get why it is designed like this. When I use subdata() I get a whole copy of the range, so indexing is valid from 0. But when I use the range subscript [], I get access to the requested range while the indexing stays the same as before. So in my first example d2[5] is 3.
But I wonder, why is it designed like this? I don't want to make a copy of my data by using the subdata() method. I just wanted to access a portion of my data with better indexing.
This especially creates unexpected behavior if you pass the result to a function. For example, the following code creates unexpected results and exceptions, and you may not easily find out why:
func testit(idata: Data) {
    if idata.count > 0 {
        print("\(idata.count)")
        print("\(idata[0])")
    }
}
//...
var d = Data(count: 10)
d[5] = 3
let d2 = d[5..<8]
testit(idata: d2)
This code is really strange, because if you debug it, you see that print("\(idata.count)") prints 3 as the size of idata, which is correct, but accessing it with idata[0] raises an exception.
Is there any reason for this design? I was expecting that I could access the Data resulting from the subscript starting at index 0, but that is not true. Can I do this without using subdata(), which creates a copy of the data, or without passing the base of the data slice as an additional argument?
d[5..<8] returns Data.Slice – which happens to be Data. Generally, slices share the indices with their base collection, as documented in Slice.
One possible reason for this design decision is that it guarantees that subscripting a slice is a O(1) operation (adding an offset for accessing the base collection is not necessarily O(1), e.g. not for strings.)
It is also convenient, as in this example, to locate the text after the second occurrence of a character in a string:
let string = "abcdefgabcdefg"
// Find first occurrence of "d":
if let r1 = string.range(of: "d") {
    // Find second occurrence of "d":
    if let r2 = string[r1.upperBound...].range(of: "d") {
        print(string[r2.upperBound...]) // efg
    }
}
As a consequence, you must never assume that the indices of a collection are zero-based (unless documented, as for Array.startIndex). Use startIndex to get the first index, or first to get the first element.
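If the Swift semantics feel slippery, the behavior is easy to model in a few lines of Python; a toy slice that shares indices with its base, which is essentially what Data's range subscript returns:

    class SharedIndexSlice:
        """Toy model of Swift's Slice: indexes into the base with the base's own indices."""
        def __init__(self, base, start, end):
            self.base, self.startIndex, self.endIndex = base, start, end

        def __len__(self):
            return self.endIndex - self.startIndex

        def __getitem__(self, i):
            if not (self.startIndex <= i < self.endIndex):
                raise IndexError(f"index {i} outside bounds {self.startIndex}..<{self.endIndex}")
            return self.base[i]

    d = [0] * 10
    d[5] = 3
    d2 = SharedIndexSlice(d, 5, 8)   # like d[5..<8]
    print(len(d2))                   # 3
    print(d2[d2.startIndex])         # 3 -> the safe way to get the "first" element
    # print(d2[0])                   # raises IndexError, just like Data's slice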

How to overcome indefinite matrix error (NbClust)?

I'm getting the following error when calling NbClust():
Error in NbClust(data = ds[, sapply(ds, is.numeric)], diss = NULL, distance = "euclidean", : The TSS matrix is indefinite. There must be too many missing values. The index cannot be calculated.
I've called ds <- ds[complete.cases(ds),] just before running NbClust, so there are no missing values.
Any idea what's behind this error?
Thanks
I had the same issue in my research, so I mailed Nadia Ghazzali, the package maintainer, and got an answer.
I'll attach my mail and her reply.
my e-mail:
Dear Nadia Ghazzali. Hello Nadia. I have some questions about the
NbClust function in the R library. I have tried googling but could not
find satisfying answers. First, I'm so grateful to you for making
this awesome R library. It is very helpful for my research. I tested
the NbClust function from the NbClust library with my own data like below.
> clust <- NbClust(data, distance = "euclidean",
min.nc = 2, max.nc = 10, method = 'kmeans', index = "all")
But soon, an error occurred. Error: division by zero! Error in
Indices.WBT(x = jeu, cl = cl1, P = TT, s = ss, vv = vv) : object
'scott' not found. So, I tried the NbClust function line by line and
found that some indices, like CCC, Scott, Marriot, TraceCovW,
TraceW, Friedman, and Rubin, were not calculated because the object
vv = 0. I'm not very familiar with algebra so I don't know the meaning
of an eigenvalue. But it seems to me that the object ss (which is the square
root of the eigenvalues) should not be 0 after being computed.
So, here are my questions.
I assume that my data is so sparse (a lot of zero values) that sqrt(eigenValues) becomes too small, is that right? I'm sorry I
can't attach my data, but I can attach some part of the eigenvalues and
the square-rooted eigenvalues.
> head(eigenValues)
[1] 0.039769880 0.017179826 0.007011972 0.005698736 0.005164871 0.004567238
> head(sqrt(eigenValues))
[1] 0.19942387 0.13107184 0.08373752 0.07548997 0.07186704 0.06758134
And if my assumption is right, what can I do about this problem? Is the only
way to drop those 7 indices?
Thank you for reading and I'll be waiting for your reply. Best regards!
and her reply:
Dear Hansol,
Thank you for your interest. Yes, your understanding is good.
Unfortunately, the seven indices could not be applied.
Best regards,
Nadia Ghazzali
@seni The cause of this error is data-related. If you look at the source code of this function:
NbClust <- function(data, diss = "NULL", distance = "euclidean", min.nc = 2, max.nc = 15,
                    method = "ward", index = "all", alphaBeale = 0.1)
{
    x <- 0
    min_nc <- min.nc
    max_nc <- max.nc
    jeu1 <- as.matrix(data)
    numberObsBefore <- dim(jeu1)[1]
    jeu <- na.omit(jeu1)  # returns the object with incomplete cases removed
    nn <- numberObsAfter <- dim(jeu)[1]
    pp <- dim(jeu)[2]
    TT <- t(jeu) %*% jeu
    sizeEigenTT <- length(eigen(TT)$value)
    eigenValues <- eigen(TT / (nn - 1))$value
    for (i in 1:sizeEigenTT)
    {
        if (eigenValues[i] < 0) {
            print(paste("There are only", numberObsAfter, "nonmissing observations out of a possible", numberObsBefore, "observations."))
            stop("The TSS matrix is indefinite. There must be too many missing values. The index cannot be calculated.")
        }
    }
And I think the root cause of this error is the negative eigenvalues that creep in when the number of clusters is very high, i.e. when max.nc is high. So to solve the problem, you must look at your data. See if it has more columns than rows. Remove missing values, and check for issues like collinearity and multicollinearity, variance, covariance, etc.
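If you want to check your own data for this condition before calling NbClust, here is a sketch in Python/NumPy that mirrors the check in the R source above (the file name is hypothetical; adapt the loading to your data):

    import numpy as np

    X = np.loadtxt("mydata.csv", delimiter=",")   # hypothetical input file
    X = X[~np.isnan(X).any(axis=1)]               # drop incomplete rows, like na.omit()

    TT = X.T @ X                                  # the TSS matrix from the NbClust source
    eigenvalues = np.linalg.eigvalsh(TT / (X.shape[0] - 1))  # eigvalsh: TT is symmetric
    print(eigenvalues.min())  # a negative value here reproduces the "indefinite" stop()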
For the other error, "invalid clustering method", look at the source code of the method here, at lines 168 and 169 in the given link. You are getting this error message because the clustering method is empty:
if (is.na(method))
    stop("invalid clustering method")

ASN1 parsing with swift

I think I get the basic idea behind ASN.1 parsing. Walk the bytes, interpret them and do something useful with them. Alas I am stuck on the implementation.
Apple has no sample code (that I could find), probably for security reasons. OpenSSL is not very well documented, so I can only guess at what the functions actually do. The only Swift sample code I did find doesn't handle case 17 (the in-app purchases), which is the one thing I am interested in.
I tried figuring out where in the data stream the pointer is located, but I always get the same nonsensical result of 49.
let receiptContents = NSData(contentsOfURL: receiptLocation)!
let receiptBIO = BIO_new(BIO_s_mem())
BIO_write(receiptBIO, receiptContents.bytes, Int32(receiptContents.length))
contents = d2i_PKCS7_bio(receiptBIO, nil)
//parsing
var currentIndex = UnsafePointer<UInt8>()
var endIndex = UnsafePointer<UInt8>()
let octets = pkcs7_d_data(pkcs7_d_sign(self.contents).memory.contents)
var ptr = UnsafePointer<UInt8>(octets.memory.data)
let end = ptr.advancedBy(Int(octets.memory.length))
println(ptr.memory) //always 49 ???
println(end.memory) //always 0 ???
println(octets.memory.length)
I tried parsing the NSData myself, but then, what is the layout of the binary data?
receiptContents = NSData(contentsOfURL: receiptLocation)!
//get bytes
let count = receiptContents.length / sizeof(UInt8)
var bytes = [UInt8](count: count, repeatedValue: 0)
receiptContents.getBytes(&bytes, length: count * sizeof(UInt8))
//parsing
for index in 0...5
{
    let value = Int(bytes[index])
    println(value)
}
I get this output:
48
130
21
57
6
9
but if I understand the ASN.1 format correctly, it's supposed to start with a value of 17 (SET), then 3 bytes for the length (an Int24?), then a value of 16 (first SEQUENCE), the sequence length (1 byte), the sequence type (1 byte), the sequence payload, (repeat).
Other types like Int32 and Int16 make even less sense to me.
Not sure how to proceed here. Any suggestions?
From the outset, I suppose I should disclaim this by saying I'm not very familiar with many of the underlying technologies (Swift, OpenSSL, the Biometrics standards that I think you're working with).
That said, you're probably running afoul of BER's tagging rules. The Wiki article on X.690 has some introductory comments about how BER tags are constructed, but really you'll want to consult Annex A of X.690 for an example encoding and X.680 §8 for information about tagging.
A SET type can appear in several different forms; but in your case 49 = 0x31 = 0b00110001 = UNIVERSAL 17 (SET, constructed). Other forms may occur, but this is the only one that's explicitly identified as a SET from the tag itself: the specification may use a different tag for a SET type.
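Decoding the identifier octet is straightforward bit-twiddling; here is a short illustrative sketch (in Python simply to keep the arithmetic readable):

    def decode_ber_identifier(octet: int):
        """Split a single-byte BER identifier into class, constructed flag, tag number."""
        tag_class = ("UNIVERSAL", "APPLICATION", "CONTEXT-SPECIFIC", "PRIVATE")[octet >> 6]
        constructed = bool(octet & 0x20)   # bit 6: primitive (0) vs constructed (1)
        tag_number = octet & 0x1F          # low 5 bits; 0x1F would signal a multi-byte tag
        return tag_class, constructed, tag_number

    print(decode_ber_identifier(0x31))  # ('UNIVERSAL', True, 17) -> constructed SET
    print(decode_ber_identifier(0x30))  # ('UNIVERSAL', True, 16) -> constructed SEQUENCE

Incidentally, the 48 130 ... at the start of your dump is exactly such a tag: 0x30 is a constructed SEQUENCE, and the 130 (0x82) that follows is a long-form length marker saying the next two bytes hold the actual length.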