Q: OPC UA location of sensor data

I have done some research on OPC UA and noticed that all sensor data on the Prosys sample server is stored in subfolders of the Objects folder (i=85).
On the OPC UA server of a machine I have seen that sensor data like the measured value, the unit, etc. can ONLY be accessed via the Types folder (i=86).
The path here would be i=84 -> i=86 -> i=88 -> i=58...
There is really no other path through which these nodes can be reached.
I have never seen such an implementation. Is it normal for such data to also be stored in the Types folder, or are there any guidelines that forbid this?
The machine is also a bit older.
Thanks for your help
UPDATE:
The further path of i=58 looks like this, where --(i=45)-> symbolizes the ReferenceType from the previous to the following node (in this case i=45, HasSubtype) and the word in parentheses next to the NodeId is the NodeClass.
i=58 --(i=45)-> ns=2;i=1(ObjectType) --(i=35)-> ns=2;i=2(Object)
--(i=35)-> ns=2;i=3(Object) --(i=47)-> ns=2;s=#setPressure(Variable) --(i=46)-> ns=2;i=5(Variable)
ns=2;s=#setPressure contains the value 250.0 and ns=2;i=5 an Engineering unit

This is not normal. It sounds like a bad implementation done by somebody who didn’t know any better.
Depending on the reference types they used to build this structure you could argue it is forbidden. DataType Nodes should only be the source of HasProperty, HasSubtype, and HasEncoding references.
Edit: The path you mention is Root -> Types -> ObjectTypes -> BaseObjectType. Are you sure the nodes you're finding under there are Variable nodes with values, or are you just seeing additional types defined by this server?


NodeId as string in ModelCompiler OPC UA

I am trying to develop an OPC UA server on my own, but since I am quite a newbie at coding, it is rather hard for me.
I have started from the QuickstartApplication found here: https://github.com/OPCFoundation/UA-.NET-Legacy
In particular, I edit the ModelDesign.xml file to customize it as I wish:
https://github.com/OPCFoundation/UA-.NET-Legacy/blob/master/ComIOP/Common/Common/ModelDesign.xml
I would like to define some nodes with NodeId as string (all the NodeId in the ModelDesign.xml in the example are numeric)
Following this xsd, I have found "StringId" and "NumericId", which look like what I was looking for:
https://github.com/OPCFoundation/UA-ModelCompiler/blob/master/ModelCompiler/UA%20Model%20Design.xsd
but changing their value in ModelDesign.xml does nothing to the NodeId. There is no error; the compiler simply assigns new NodeIds (all numeric) as if it ignored the changes I have made.
As the compiler, I am using the ModelCompiler found on GitHub:
https://github.com/OPCFoundation/UA-ModelCompiler
Can somebody help me, please? How can I customize the NodeId of the nodes?
Thank you
Edo
The best suggestion that I can offer at this stage is to clone UA-.NETStandard and run the NetCoreConsoleServer in
UA-.NETStandard/SampleApplications/Samples/NetCoreConsoleServer
through the debugger. The boiler node manager, if my memory serves me well, uses string IDs. The interface INodeIdFactory in ISystemContext.cs offers some insight into how IDs are generated.
IMHO, the model designer has no switch to enforce string IDs, as you know. So you'll need to programmatically allocate string IDs rather than numeric IDs to nodes upon server boot. I haven't figured it out yet either.
So, you may set breakpoints in BoilerNodeManager.cs and see how the NodeId is actually constructed.

Read YAML config through REST API

I have a really complicated system which uses multiple languages and frameworks (Java, Python, Scala, Bash). In each module I need to retrieve configuration values which are similar and change frequently. Currently I'm maintaining multiple conf files which hold lots of duplicates.
I wonder if there is an out-of-the-box REST API which can retrieve variables on demand from a remote location.
All I have managed to find so far are ways to load the entire file from a remote source, which is only half a solution for me:
YAML.parse(open('https://link_to_file/file.yaml'))
My goal, which I have failed to find any lead on, is to make a direct call:
MyRemoteAPI.get("level1.level2.x")
P.S.
YAML is not a mandatory solution for me; I'm open to suggestions.
I don't know about an out-of-the-box API, but it's fairly trivial to build: make a service that reads the YAML file and traverses to the appropriate key. Using a dynamic language like Ruby (+ Rails), you could do something like
def value
  config = YAML.load_file '/local/path/to/config.yaml'
  # splat the dotted path ("level1.level2.x") into dig's argument list
  render plain: config.dig(*params[:key].split('.'))
end
dig essentially traverses a structure and safely returns nil if a key isn't found, so this returns the value at the "leaf" of the requested path.
You might also want to cache the structure in memory to prevent constantly reading from the file, e.g. you could do something like @@config ||= YAML.parse(open('https://link_to_file/file.yaml')) or config = Rails.cache.fetch('config', expires_in: 1.hour) { ... }. And/or cache the API's HTTP response.
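If you don't want a full Rails app, the same idea fits in a tiny standalone service. A minimal sketch using Sinatra (hedged; the file path and route are invented, not from the original answer):
require 'sinatra'
require 'yaml'

# GET /config?key=level1.level2.x looks the dotted path up in the YAML file.
get '/config' do
  halt 400, 'missing key' if params['key'].to_s.empty?
  config = YAML.load_file('/local/path/to/config.yaml')
  value  = config.dig(*params['key'].split('.'))
  halt 404, 'key not found' if value.nil?
  value.to_s
end
Run it with ruby service.rb and a call like http://localhost:4567/config?key=level1.level2.x returns the value as plain text.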

Puppet Class: define a variable which list all files in a directory

I'm defining my own Puppet class, and I was wondering if it is possible to have an array variable which contains a list of all files in a specific directory. I was hoping for a syntax similar to the line below, but haven't found a way to make it work.
$dirs = Dir.entries('C:\\Program Files\\Java\\')
Does anyone know how to do this in a Puppet file?
Thanks!
I was wondering if it is possible to have an array variable which contains a list of all files in a specific directory.
Information about the current state of the machine to be configured is conveyed to the catalog compiler via facts. These are available to your classes as top-scope variables, and Puppet (or Facter, actually) provides ways to define your own custom facts. That's a link into the Facter 3 manual, but similar applies to earlier versions. Do not overlook the rest of the Facter documentation, which has more relevant information on this topic.
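For example, a custom fact covering the question's directory listing might look like the sketch below (hedged; the fact name java_dirs is invented, and the file would live in a module's lib/facter/java_dirs.rb):
Facter.add(:java_dirs) do
  confine :kernel => 'windows'   # only meaningful on Windows boxes
  setcode do
    path = 'C:\Program Files\Java'
    # Dir.entries includes '.' and '..', so filter those out.
    Dir.exist?(path) ? Dir.entries(path).reject { |e| e.start_with?('.') } : []
  end
end
Your class could then read it as a top-scope variable, e.g. $dirs = $::java_dirs.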
On the other hand, information about the machine providing catalog-building services -- the master in a master / agent setup -- can be obtained by writing and calling a custom function. This is rarely what you actually want, but it's worth mentioning because you might one day want a custom function for some other purpose.

Get contents of a Scala List in a running process?

I have a running Scala process and want to get the contents of a List in that process. I know the PID of the process and I know the name of the List[String], and I have taken a heap dump with VisualVM. Is there a way for me to find the actual contents of that specific list and save it somewhere?
If the List[String] instance is referenced by a class (for example via a val) then you can look for the class that holds it.
You can download Eclipse Memory Analyzer (MAT) and open your heap dump.
You can then click the 'Dominator tree' and in the top of the table you can type the name of the class that holds your list.
If you have found the class, right-click it and select 'List objects -> With Incoming References'; that should give you the instances of the class that could all potentially hold the list.
Right-click one of the instances and select 'List objects -> With Outgoing References'; that should give you a tree structure in which you will find your list.
Note that once you find your list, you can check out the panel on the left (the inspector panel), which contains readable information.
Note: the above steps are from the top of my head, so they might not be completely accurate. This should however give you a good sense of direction.
Good luck!
I'm sure it is possible in principle, but there's surely nothing simple, straightforward, and off-the-shelf that lets you do so.
I'd probably go with using the Java Platform Debugger Architecture (JPDA) and its Java Debugging Wire Protocol (JDWP) to get at the raw information you'd need. From there you can use Java and / or Scala reflection to discover what to query in the target JVM.
I don't know how much of this is applicable to heap dumps. In the old days, the C / Unix debugging tools could operate on either core dumps or active processes.

How to handle environment-specific application configuration organization-wide?

Problem
Your organization has many separate applications, some of which interact with each other (to form "systems"). You need to deploy these applications to separate environments to facilitate staged testing (for example, DEV, QA, UAT, PROD). A given application needs to be configured slightly differently in each environment (each environment has a separate database, for example). You want this re-configuration to be handled by some sort of automated mechanism so that your release managers don't have to manually configure each application every time it is deployed to a different environment.
Desired Features
I would like to design an organization-wide configuration solution with the following properties (ideally):
Supports "one click" deployments (only the environment needs to be specified, and no manual re-configuration during/after deployment should be necessary).
There should be a single "system of record" where a shared environment-dependent property is specified (such as a database connection string that is shared by many applications).
Supports re-configuration of deployed applications (in the event that an environment-specific property needs to change), ideally without requiring a re-deployment of the application.
Allows an application to be run on the same machine, but in different environments (run a PROD instance and a DEV instance simultaneously).
Possible Solutions
I see two basic directions in which a solution could go:
Make all applications "environment aware". You would pass the environment name (DEV, QA, etc) at the command line to the app, and then the app is "smart" enough to figure out the environment-specific configuration values at run-time. The app could fetch the values from flat files deployed along with the app, or from a central configuration service.
Applications are not "smart" as they are in #1, and simply fetch configuration by property name from config files deployed with the app. The values of these properties are injected into the config files at deploy-time by the install program/script. That install script takes the environment name and fetches all relevant configuration values from a central configuration service.
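For concreteness, option 1 at its simplest boils down to a few lines (a hypothetical Ruby sketch; file names invented):
require 'yaml'

# The app is launched as `myapp DEV` and resolves its own configuration.
env    = ARGV.fetch(0)                        # "DEV", "QA", "UAT", or "PROD"
config = YAML.load_file("config.#{env.downcase}.yaml")
db_url = config.fetch('database_url')         # environment-specific value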
Question
How would/have you achieved a configuration solution that solves these problems and supports these desired features? Am I on target with the two possible solutions? Do you have a preference between those solutions? Also, please feel free to tell me that I'm thinking about the problem all wrong. Any feedback would be greatly appreciated.
We've all run into these kinds of things, particularly in large organizations. I think it's most important to manage your own expectations first, and also ask whether it's really necessary to tell every system and subsystem on a given box to "change to DEV mode" or "change to PROD mode". My personal recommendation is as follows:
Make individual boxes responsible for a different stage - i.e. "this is a DEV box", and "this is a PROD box".
Collect as much of the configuration that differs from box to box in one location, even if that requires soft links or scripts that collect the information and print it out.
A. This way, you can easily "dump this box's configuration" in two places and see what differs, for example after a new deployment.
B. You can also make configuration changes separate from software changes, at least to some degree, which is a good way to root out bugs that happen at release time.
Then have everything base its configuration on something/somewhere that is not baked-in or hard-coded - just make sure to collect and document it in that one location. It almost doesn't matter what the mechanism is, which is a good thing, because some systems just don't want to be forced to use some mechanisms or others.
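To make the first point concrete: the per-box lookup can be tiny (a hypothetical Ruby sketch; the /etc/stage marker file is an invented convention, not from this answer):
require 'yaml'

# Each box declares its stage once, in one well-known, non-hard-coded place.
stage  = File.read('/etc/stage').strip        # e.g. "DEV" or "PROD"
config = YAML.load_file("/etc/myorg/config.#{stage}.yaml")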
Sorry if this is too general an answer - the question was very general. I've worked in several large software-based organizations before, and this seemed to be the best approach. Using a standalone server as "one unit of deployment" is the most realistic scenario (though sometimes it's expensive), since applications affect each other, and no matter how careful you are, you destabilize a whole system when you move any given gear or cog.
The alternative gets very complex very quickly. You need to start rewriting the applications that you have control over in order to have them accept a "DEV" switch, and you end up adding layers of kludge to the ones you don't have control over. Usually, the ones you don't have control over at least base their properties on something defined on a system-wide level, unless they are "calling the mothership for instructions".
It's easier to redirect people to a remote location and have them "use DEV" vs "use PROD" than it is to "make this machine run like DEV" vs "make this machine run like PROD". And if you're mixing things up, like having a DEV task run together on the same box as a PROD task, then that's not a realistic scenario anyways: I guarantee that eventually you will be granting illegal DEV-only access to somebody on PROD, and you'll have a DEV task wipe out a PROD database.
Hope this helps. Let me know if you'd like to discuss more specifics involved.
I personally prefer solution 2 (the app should know itself, by its configuration, what environment it is running in). With solution 1 (pass the environment name as a startup parameter) the danger of using the wrong environment specifier is much too high. Accessing the TEST database from PROD code and vice versa may cause mayhem, if the two installed code bases are not of the same version, as is often the case.
My current project uses solution 1, but I don't like that. A previous project I worked on used a variation of solution 2: the build process generated one setup file for every environment, making sure that they contained the same code base but appropriate configuration parameters. That worked like a charm, but I know it contradicts the paradigm that the "exact same build files must be deployed everywhere".
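In spirit, that build step can be very small. A hedged Ruby/ERB sketch of the idea (invented file names, not the actual project's tooling):
require 'erb'
require 'yaml'

# One template plus one values file per environment yields one config each.
template = ERB.new(File.read('config.properties.erb'))
%w[DEV QA UAT PROD].each do |env|
  values = YAML.load_file("environments/#{env}.yaml")
  File.write("build/config.#{env}.properties",
             template.result_with_hash(values: values))
end
Inside the template, <%= values['database_url'] %> and friends expand to the environment-specific settings.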
I think I asked a related, self-answered question before I read this one: How to organize code so that we can move and update it without having to edit the location of the configuration file?. So, on that basis, I provide an answer here. I don't like the idea of a "smart" application (solution 1 here) for such a simple task as finding environment settings. It seems a complicated framework for something that should be simple. The idea of an install script (solution 2 here) is powerful, and it is useful for letting the user change the content of the config file, but would it allow changing the location of that config file? What is this "central configuration service", and where is it located? My answer is that I would go with option 2 if the goal is to set the content of the configuration file, but I feel that the issue of the location of this configuration file remains unanswered here.
If you're using JSON to store/transmit configuration (or can use JSON in your pre-deploy process to output to some other format) you can annotate key/property names for environment/context-specific values with arbitrary or environment-specific suffixes, and then dynamically prefer/discriminate them at build/deploy/run/render time, while leaving un-annotated properties alone.
We have used this to avoid duplicating entire configuration files (with the associated problems well known) AND to reduce repetition. The technique is also perfect for internationalization (i18n) -- even within the same file, if desired.
Example, snippet of pre-processed JSON config:
var config = {
  'ver': '1.0',
  'help': {
    'BLURB': 'This pre-production environment is not supported. Contact Development Team with questions.',
    'PHONE': '808-867-5309',
    'EMAIL': 'coder.jen@lostnumber.com'
  },
  'help#www.productionwebsite.com': {
    'BLURB': 'Please contact Customer Service Center',
    'BLURB#fr': 'S\'il vous plaît communiquer avec notre Centre de service à la clientèle',
    'BLURB#de': 'Bitte kontaktieren Sie unseren Kundendienst!!1!',
    'PHONE': '1-800-CUS-TOMR',
    'EMAIL': 'customer.service@productionwebsite.com'
  },
}
... and post-processed (in this case, at render time) given the dynamic, browser-environment-known location.hostname = 'www.productionwebsite.com' and navigator.language of 'de':
prefer(config,['www.productionwebsite.com','de']); // prefer(obj,string|Array<string>)
JSON.stringify(config); // {
  'ver': '1.0',
  'help': {
    'BLURB': 'Bitte kontaktieren Sie unseren Kundendienst!!1!',
    'PHONE': '1-800-CUS-TOMR',
    'EMAIL': 'customer.service@productionwebsite.com'
  }
}
If a non-annotated ('base') property has no competing annotated property, it is left alone (presumably global across environments); otherwise its value is replaced by an annotated value, if the suffix matches one of the inputs to the preference/discrimination function. Annotated properties that do not match are dropped entirely.
You can mix and match this behaviour to achieve global/default/specific distinctions that are (assuming you're sensible) readable, with zero or minimal duplication.
The single, recursive prefer() function (as we're calling it, lacking the need or desire to make an entire project/framework out of it) we've developed so far (see jsFiddle, with inline docs) goes a bit further than this simple example, and (explained in greater detail here) handles deeply-nested configuration objects, as well as preferential ordering and (if you need to stay flat) combination of suffixes.
The function relies on JS's ability to reference object properties as strings, dynamically, and to tolerate # and & delimiters in property names, which are not valid in dot-notation syntax but consequently help prevent developers from breaking this technique by accidentally referring to pre-processed/annotated attributes in code (unless they unconventionally avoid dot notation).
We have yet to have this break anything for us, nor have we been schooled on any fundamental flaws of this technique, beyond irresponsible/unintended usage or an investment in (and fondness for) pre-existing frameworks and techniques. We have also not profiled it for performance (we only tend to run it once per build/session, etc.), so in your own usage, YMMV.
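The core of the idea also ports easily to other languages. As a sketch only (far simpler than the real prefer(): one pass, no preferential ordering or & combination), a Ruby analogue could look like:
def prefer(node, suffixes)
  return node unless node.is_a?(Hash)
  base      = {}
  annotated = {}
  node.each do |key, value|
    name, suffix = key.split('#', 2)
    if suffix.nil?
      base[name] = prefer(value, suffixes)        # un-annotated: keep as-is
    elsif suffixes.include?(suffix)
      annotated[name] = prefer(value, suffixes)   # matching annotation wins
    end                                           # non-matching: dropped
  end
  base.merge(annotated)
end
Calling prefer(config, ['www.productionwebsite.com', 'de']) on a Ruby-hash version of the config above collapses the annotated keys exactly as in the JSON.stringify example.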
Most configurations transmitted client-side of course would not want to contain sensitive pre-production values, so one could (should!) use the same function to generate a production-only version (with no annotations) in pre-deploy, while still enjoying a SINGLE configuration file upstream in your process.
Further, if you're doing this for i18n, you may not want the entire wad going over the wire, so could process it server-side (cached or live, etc.) or pre-process it in build/deploy by splitting into separate files, but STILL enjoying a single source of truth as early in your workflow as possible.
We have not explored implementing the same function in Java (or C#, PERL, etc.) assuming it's even possible (with some exotic reflection maybe?) but a build environment that includes NodeJS could farm that step out easily.
Well, if it suits your needs and you have no problem storing the connection strings in the source control repository, you could create files like:
appsettings.dev.json
appsettings.qa.json
appsettings.staging.json
Then choose the right one in the deployment script and rename it to the actual appsettings.json, which your app then reads.
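The "choose and rename" step is then a couple of lines in whatever language your deployment script uses; a hypothetical Ruby version:
require 'fileutils'

# Promote the environment-specific file to the name the app actually reads.
env = ENV.fetch('DEPLOY_ENV')   # e.g. 'dev', 'qa', 'staging'
FileUtils.cp("appsettings.#{env}.json", 'appsettings.json')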